| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
openshift-oauth-apiserver |
api |
ClusterIPNotAllocated |
Cluster IP [IPv4]:172.30.222.225 is not allocated; repairing | ||
openstack |
cinder-7ba05-volume-lvm-iscsi-0 |
Scheduled |
Successfully assigned openstack/cinder-7ba05-volume-lvm-iscsi-0 to master-0 | ||
openshift-machine-api |
cluster-autoscaler-operator-866dc4744-hzrg4 |
Scheduled |
Successfully assigned openshift-machine-api/cluster-autoscaler-operator-866dc4744-hzrg4 to master-0 | ||
openshift-machine-api |
cluster-baremetal-operator-6f69995874-nm9nx |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-6f69995874-nm9nx to master-0 | ||
openshift-machine-api |
ironic-proxy-kc5xl |
Scheduled |
Successfully assigned openshift-machine-api/ironic-proxy-kc5xl to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-69c6b55594-l2279 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-api |
machine-api-operator-6fbb6cf6f9-qx75g |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-6fbb6cf6f9-qx75g to master-0 | ||
openshift-cloud-credential-operator |
cloud-credential-operator-744f9dbf77-s7ts2 |
Scheduled |
Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-s7ts2 to master-0 | ||
openshift-machine-api |
metal3-546c754db-8r9wh |
Scheduled |
Successfully assigned openshift-machine-api/metal3-546c754db-8r9wh to master-0 | ||
cert-manager |
cert-manager-545d4d4674-29rbn |
Scheduled |
Successfully assigned cert-manager/cert-manager-545d4d4674-29rbn to master-0 | ||
sushy-emulator |
sushy-emulator-59477995f9-w2dvk |
Scheduled |
Successfully assigned sushy-emulator/sushy-emulator-59477995f9-w2dvk to master-0 | ||
sushy-emulator |
nova-console-recorder-6d7748fc8c-9phbj |
Scheduled |
Successfully assigned sushy-emulator/nova-console-recorder-6d7748fc8c-9phbj to master-0 | ||
sushy-emulator |
nova-console-poller-676c49b655-wglrh |
Scheduled |
Successfully assigned sushy-emulator/nova-console-poller-676c49b655-wglrh to master-0 | ||
openstack-operators |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
Scheduled |
Successfully assigned openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-2pmjv to master-0 | ||
openstack-operators |
test-operator-controller-manager-5c5cb9c4d7-5znsj |
Scheduled |
Successfully assigned openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5znsj to master-0 | ||
openstack-operators |
telemetry-operator-controller-manager-d6b694c5-j5ggz |
Scheduled |
Successfully assigned openstack-operators/telemetry-operator-controller-manager-d6b694c5-j5ggz to master-0 | ||
cert-manager |
cert-manager-cainjector-5545bd876-z82hq |
Scheduled |
Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-z82hq to master-0 | ||
openstack-operators |
swift-operator-controller-manager-c674c5965-65d6b |
Scheduled |
Successfully assigned openstack-operators/swift-operator-controller-manager-c674c5965-65d6b to master-0 | ||
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
Scheduled |
Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bbgx4 to master-0 | ||
openstack-operators |
placement-operator-controller-manager-5784578c99-4tjlx |
Scheduled |
Successfully assigned openstack-operators/placement-operator-controller-manager-5784578c99-4tjlx to master-0 | ||
openstack-operators |
ovn-operator-controller-manager-884679f54-7fq2b |
Scheduled |
Successfully assigned openstack-operators/ovn-operator-controller-manager-884679f54-7fq2b to master-0 | ||
openstack-operators |
openstack-operator-index-k889w |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-index-k889w to master-0 | ||
openstack-operators |
openstack-operator-controller-manager-86bd8996f6-8hx4g |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-manager-86bd8996f6-8hx4g to master-0 | ||
openstack-operators |
openstack-operator-controller-init-b85c4d696-8qpd5 |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-init-b85c4d696-8qpd5 to master-0 | ||
openstack-operators |
openstack-baremetal-operator-controller-manager-74c4796899dzhg7 |
Scheduled |
Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dzhg7 to master-0 | ||
cert-manager |
cert-manager-webhook-6888856db4-mgklh |
Scheduled |
Successfully assigned cert-manager/cert-manager-webhook-6888856db4-mgklh to master-0 | ||
openstack-operators |
octavia-operator-controller-manager-5b9f45d989-jv72h |
Scheduled |
Successfully assigned openstack-operators/octavia-operator-controller-manager-5b9f45d989-jv72h to master-0 | ||
openstack-operators |
nova-operator-controller-manager-5d488d59fb-pw2xk |
Scheduled |
Successfully assigned openstack-operators/nova-operator-controller-manager-5d488d59fb-pw2xk to master-0 | ||
openstack-operators |
neutron-operator-controller-manager-767865f676-r78pl |
Scheduled |
Successfully assigned openstack-operators/neutron-operator-controller-manager-767865f676-r78pl to master-0 | ||
openstack-operators |
mariadb-operator-controller-manager-67ccfc9778-s5trr |
Scheduled |
Successfully assigned openstack-operators/mariadb-operator-controller-manager-67ccfc9778-s5trr to master-0 | ||
openstack-operators |
manila-operator-controller-manager-55f864c847-6n7n9 |
Scheduled |
Successfully assigned openstack-operators/manila-operator-controller-manager-55f864c847-6n7n9 to master-0 | ||
openstack-operators |
keystone-operator-controller-manager-768b96df4c-kh9rb |
Scheduled |
Successfully assigned openstack-operators/keystone-operator-controller-manager-768b96df4c-kh9rb to master-0 | ||
openstack-operators |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
Scheduled |
Successfully assigned openstack-operators/ironic-operator-controller-manager-6f787dddc9-qlfpx to master-0 | ||
openstack-operators |
infra-operator-controller-manager-7dd6bb94c9-xmlj9 |
Scheduled |
Successfully assigned openstack-operators/infra-operator-controller-manager-7dd6bb94c9-xmlj9 to master-0 | ||
openstack-operators |
horizon-operator-controller-manager-8464cc45fb-b8s4c |
Scheduled |
Successfully assigned openstack-operators/horizon-operator-controller-manager-8464cc45fb-b8s4c to master-0 | ||
openstack-operators |
heat-operator-controller-manager-67dd5f86f5-ft2mk |
Scheduled |
Successfully assigned openstack-operators/heat-operator-controller-manager-67dd5f86f5-ft2mk to master-0 | ||
openshift-machine-api |
metal3-baremetal-operator-78474bdc48-lpxgr |
Scheduled |
Successfully assigned openshift-machine-api/metal3-baremetal-operator-78474bdc48-lpxgr to master-0 | ||
openshift-marketplace |
redhat-operators-zpvpd |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-zpvpd to master-0 | ||
openshift-marketplace |
redhat-marketplace-wzz6n |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-wzz6n to master-0 | ||
openstack-operators |
glance-operator-controller-manager-79df6bcc97-sq7cg |
Scheduled |
Successfully assigned openstack-operators/glance-operator-controller-manager-79df6bcc97-sq7cg to master-0 | ||
openstack-operators |
designate-operator-controller-manager-588d4d986b-lmp5n |
Scheduled |
Successfully assigned openstack-operators/designate-operator-controller-manager-588d4d986b-lmp5n to master-0 | ||
openstack-operators |
cinder-operator-controller-manager-8d58dc466-zvf6m |
Scheduled |
Successfully assigned openstack-operators/cinder-operator-controller-manager-8d58dc466-zvf6m to master-0 | ||
openshift-machine-api |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Scheduled |
Successfully assigned openshift-machine-api/metal3-image-customization-7b5d8dfcfd-gjzrj to master-0 | ||
openshift-storage |
lvms-operator-c6dbd8b78-6p8rh |
Scheduled |
Successfully assigned openshift-storage/lvms-operator-c6dbd8b78-6p8rh to master-0 | ||
openshift-network-diagnostics |
network-check-source-b4bf74f6-wqvfk |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-authentication |
oauth-openshift-69bfd98cf-4dhhm |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-69bfd98cf-4dhhm to master-0 | ||
openshift-storage |
vg-manager-jzfd5 |
Scheduled |
Successfully assigned openshift-storage/vg-manager-jzfd5 to master-0 | ||
openstack-operators |
7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv |
Scheduled |
Successfully assigned openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-69c6b55594-l2279 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openstack-operators |
barbican-operator-controller-manager-59bc569d95-j929h |
Scheduled |
Successfully assigned openstack-operators/barbican-operator-controller-manager-59bc569d95-j929h to master-0 | ||
openstack-operators |
7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv |
Scheduled |
Successfully assigned openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv to master-0 | ||
openshift-nmstate |
nmstate-webhook-5f558f5558-5wgm6 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-webhook-5f558f5558-5wgm6 to master-0 | ||
openstack |
swift-ring-rebalance-l8hw9 |
Scheduled |
Successfully assigned openstack/swift-ring-rebalance-l8hw9 to master-0 | ||
openstack |
swift-proxy-77dc968fc8-nnkkj |
Scheduled |
Successfully assigned openstack/swift-proxy-77dc968fc8-nnkkj to master-0 | ||
openstack |
root-account-create-update-k88tf |
Scheduled |
Successfully assigned openstack/root-account-create-update-k88tf to master-0 | ||
openstack |
root-account-create-update-dh5fs |
Scheduled |
Successfully assigned openstack/root-account-create-update-dh5fs to master-0 | ||
openstack |
rabbitmq-server-0 |
Scheduled |
Successfully assigned openstack/rabbitmq-server-0 to master-0 | ||
openstack |
rabbitmq-cell1-server-0 |
Scheduled |
Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0 | ||
openstack |
placement-f95e-account-create-update-gph65 |
Scheduled |
Successfully assigned openstack/placement-f95e-account-create-update-gph65 to master-0 | ||
openstack |
placement-db-sync-2flmr |
Scheduled |
Successfully assigned openstack/placement-db-sync-2flmr to master-0 | ||
openstack |
placement-db-create-5bqq7 |
Scheduled |
Successfully assigned openstack/placement-db-create-5bqq7 to master-0 | ||
openstack |
placement-687479ff9d-8shw8 |
Scheduled |
Successfully assigned openstack/placement-687479ff9d-8shw8 to master-0 | ||
openshift-ingress |
router-default-7dcf5569b5-4cst9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
metallb-system |
speaker-jkzd2 |
Scheduled |
Successfully assigned metallb-system/speaker-jkzd2 to master-0 | ||
metallb-system |
metallb-operator-webhook-server-754b74fdf5-vvbj2 |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-754b74fdf5-vvbj2 to master-0 | ||
metallb-system |
metallb-operator-controller-manager-6d7b76b756-hw274 |
Scheduled |
Successfully assigned metallb-system/metallb-operator-controller-manager-6d7b76b756-hw274 to master-0 | ||
openshift-console |
console-5fdb5b65cd-fdkqt |
Scheduled |
Successfully assigned openshift-console/console-5fdb5b65cd-fdkqt to master-0 | ||
metallb-system |
frr-k8s-webhook-server-bcc4b6f68-sfpc9 |
Scheduled |
Successfully assigned metallb-system/frr-k8s-webhook-server-bcc4b6f68-sfpc9 to master-0 | ||
metallb-system |
frr-k8s-dttqv |
Scheduled |
Successfully assigned metallb-system/frr-k8s-dttqv to master-0 | ||
openstack |
placement-67c9b9475d-ksb2w |
Scheduled |
Successfully assigned openstack/placement-67c9b9475d-ksb2w to master-0 | ||
openstack |
ovsdbserver-sb-0 |
Scheduled |
Successfully assigned openstack/ovsdbserver-sb-0 to master-0 | ||
openstack |
ovsdbserver-nb-0 |
Scheduled |
Successfully assigned openstack/ovsdbserver-nb-0 to master-0 | ||
openstack |
ovn-northd-0 |
Scheduled |
Successfully assigned openstack/ovn-northd-0 to master-0 | ||
openstack |
ovn-controller-ovs-sl66q |
Scheduled |
Successfully assigned openstack/ovn-controller-ovs-sl66q to master-0 | ||
openstack |
ovn-controller-metrics-7dlz8 |
Scheduled |
Successfully assigned openstack/ovn-controller-metrics-7dlz8 to master-0 | ||
openstack |
ovn-controller-m68fw |
Scheduled |
Successfully assigned openstack/ovn-controller-m68fw to master-0 | ||
openstack |
openstackclient |
Scheduled |
Successfully assigned openstack/openstackclient to master-0 | ||
metallb-system |
controller-7bb4cc7c98-jkh97 |
Scheduled |
Successfully assigned metallb-system/controller-7bb4cc7c98-jkh97 to master-0 | ||
openshift-controller-manager |
controller-manager-57bfdb854-c5vtx |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-57bfdb854-c5vtx to master-0 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp to master-0 | ||
openshift-operators |
perses-operator-f44656786-v74wx |
Scheduled |
Successfully assigned openshift-operators/perses-operator-f44656786-v74wx to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-55f5cd545d-pkh9v |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-55f5cd545d-pkh9v to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-57dc475b7c-7h2xd |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-operators |
observability-operator-6dd7dd855f-lm5gw |
Scheduled |
Successfully assigned openshift-operators/observability-operator-6dd7dd855f-lm5gw to master-0 | ||
openshift-nmstate |
nmstate-console-plugin-86f58fcf4-dlgsc |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-console-plugin-86f58fcf4-dlgsc to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-57dc475b7c-7h2xd |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-57dc475b7c-7h2xd to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 to master-0 | ||
openshift-operators |
obo-prometheus-operator-8ff7d675-wdrhg |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-8ff7d675-wdrhg to master-0 | ||
openshift-nmstate |
nmstate-handler-gns5r |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-handler-gns5r to master-0 | ||
openshift-nmstate |
nmstate-metrics-9b8c8685d-cpgt6 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-metrics-9b8c8685d-cpgt6 to master-0 | ||
openshift-nmstate |
nmstate-operator-796d4cfff4-h6jnz |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-operator-796d4cfff4-h6jnz to master-0 | ||
openshift-machine-api |
cluster-autoscaler-operator-866dc4744-hzrg4 |
Scheduled |
Successfully assigned openshift-machine-api/cluster-autoscaler-operator-866dc4744-hzrg4 to master-0 | ||
openshift-machine-api |
cluster-baremetal-operator-6f69995874-nm9nx |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-6f69995874-nm9nx to master-0 | ||
metallb-system |
speaker-jkzd2 |
Scheduled |
Successfully assigned metallb-system/speaker-jkzd2 to master-0 | ||
openshift-operator-lifecycle-manager |
packageserver-65cccc5599-mhl2j |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-65cccc5599-mhl2j to master-0 | ||
openshift-machine-api |
ironic-proxy-kc5xl |
Scheduled |
Successfully assigned openshift-machine-api/ironic-proxy-kc5xl to master-0 | ||
openshift-machine-api |
machine-api-operator-6fbb6cf6f9-qx75g |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-6fbb6cf6f9-qx75g to master-0 | ||
cert-manager |
cert-manager-webhook-6888856db4-mgklh |
Scheduled |
Successfully assigned cert-manager/cert-manager-webhook-6888856db4-mgklh to master-0 | ||
cert-manager |
cert-manager-cainjector-5545bd876-z82hq |
Scheduled |
Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-z82hq to master-0 | ||
openshift-console |
console-54cf565479-phtrp |
Scheduled |
Successfully assigned openshift-console/console-54cf565479-phtrp to master-0 | ||
cert-manager |
cert-manager-545d4d4674-29rbn |
Scheduled |
Successfully assigned cert-manager/cert-manager-545d4d4674-29rbn to master-0 | ||
openstack-operators |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
Scheduled |
Successfully assigned openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-2pmjv to master-0 | ||
openstack-operators |
test-operator-controller-manager-5c5cb9c4d7-5znsj |
Scheduled |
Successfully assigned openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5znsj to master-0 | ||
openshift-controller-manager |
controller-manager-5cbdcbd8d7-wz2vj |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-5cbdcbd8d7-wz2vj |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5cbdcbd8d7-wz2vj to master-0 | ||
openstack-operators |
telemetry-operator-controller-manager-d6b694c5-j5ggz |
Scheduled |
Successfully assigned openstack-operators/telemetry-operator-controller-manager-d6b694c5-j5ggz to master-0 | ||
openstack-operators |
swift-operator-controller-manager-c674c5965-65d6b |
Scheduled |
Successfully assigned openstack-operators/swift-operator-controller-manager-c674c5965-65d6b to master-0 | ||
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
Scheduled |
Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bbgx4 to master-0 | ||
openstack-operators |
placement-operator-controller-manager-5784578c99-4tjlx |
Scheduled |
Successfully assigned openstack-operators/placement-operator-controller-manager-5784578c99-4tjlx to master-0 | ||
openstack-operators |
ovn-operator-controller-manager-884679f54-7fq2b |
Scheduled |
Successfully assigned openstack-operators/ovn-operator-controller-manager-884679f54-7fq2b to master-0 | ||
openstack-operators |
openstack-operator-index-k889w |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-index-k889w to master-0 | ||
openstack-operators |
openstack-operator-controller-manager-86bd8996f6-8hx4g |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-manager-86bd8996f6-8hx4g to master-0 | ||
openstack-operators |
openstack-operator-controller-init-b85c4d696-8qpd5 |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-init-b85c4d696-8qpd5 to master-0 | ||
openstack-operators |
openstack-baremetal-operator-controller-manager-74c4796899dzhg7 |
Scheduled |
Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dzhg7 to master-0 | ||
openstack-operators |
octavia-operator-controller-manager-5b9f45d989-jv72h |
Scheduled |
Successfully assigned openstack-operators/octavia-operator-controller-manager-5b9f45d989-jv72h to master-0 | ||
openstack-operators |
nova-operator-controller-manager-5d488d59fb-pw2xk |
Scheduled |
Successfully assigned openstack-operators/nova-operator-controller-manager-5d488d59fb-pw2xk to master-0 | ||
openstack-operators |
neutron-operator-controller-manager-767865f676-r78pl |
Scheduled |
Successfully assigned openstack-operators/neutron-operator-controller-manager-767865f676-r78pl to master-0 | ||
openstack-operators |
mariadb-operator-controller-manager-67ccfc9778-s5trr |
Scheduled |
Successfully assigned openstack-operators/mariadb-operator-controller-manager-67ccfc9778-s5trr to master-0 | ||
openshift-insights |
insights-operator-68bf6ff9d6-wshz8 |
Scheduled |
Successfully assigned openshift-insights/insights-operator-68bf6ff9d6-wshz8 to master-0 | ||
openstack-operators |
manila-operator-controller-manager-55f864c847-6n7n9 |
Scheduled |
Successfully assigned openstack-operators/manila-operator-controller-manager-55f864c847-6n7n9 to master-0 | ||
openstack-operators |
keystone-operator-controller-manager-768b96df4c-kh9rb |
Scheduled |
Successfully assigned openstack-operators/keystone-operator-controller-manager-768b96df4c-kh9rb to master-0 | ||
openshift-cluster-samples-operator |
cluster-samples-operator-85f7577d78-mfxr5 |
Scheduled |
Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-mfxr5 to master-0 | ||
openstack-operators |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
Scheduled |
Successfully assigned openstack-operators/ironic-operator-controller-manager-6f787dddc9-qlfpx to master-0 | ||
openstack-operators |
infra-operator-controller-manager-7dd6bb94c9-xmlj9 |
Scheduled |
Successfully assigned openstack-operators/infra-operator-controller-manager-7dd6bb94c9-xmlj9 to master-0 | ||
openstack-operators |
horizon-operator-controller-manager-8464cc45fb-b8s4c |
Scheduled |
Successfully assigned openstack-operators/horizon-operator-controller-manager-8464cc45fb-b8s4c to master-0 | ||
openstack-operators |
heat-operator-controller-manager-67dd5f86f5-ft2mk |
Scheduled |
Successfully assigned openstack-operators/heat-operator-controller-manager-67dd5f86f5-ft2mk to master-0 | ||
openshift-cluster-storage-operator |
cluster-storage-operator-7d87854d6-g96tv |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-g96tv to master-0 | ||
openshift-machine-api |
metal3-546c754db-8r9wh |
Scheduled |
Successfully assigned openshift-machine-api/metal3-546c754db-8r9wh to master-0 | ||
openshift-machine-api |
metal3-baremetal-operator-78474bdc48-lpxgr |
Scheduled |
Successfully assigned openshift-machine-api/metal3-baremetal-operator-78474bdc48-lpxgr to master-0 | ||
openstack-operators |
barbican-operator-controller-manager-59bc569d95-j929h |
Scheduled |
Successfully assigned openstack-operators/barbican-operator-controller-manager-59bc569d95-j929h to master-0 | ||
openstack-operators |
cinder-operator-controller-manager-8d58dc466-zvf6m |
Scheduled |
Successfully assigned openstack-operators/cinder-operator-controller-manager-8d58dc466-zvf6m to master-0 | ||
openstack-operators |
designate-operator-controller-manager-588d4d986b-lmp5n |
Scheduled |
Successfully assigned openstack-operators/designate-operator-controller-manager-588d4d986b-lmp5n to master-0 | ||
openstack-operators |
glance-operator-controller-manager-79df6bcc97-sq7cg |
Scheduled |
Successfully assigned openstack-operators/glance-operator-controller-manager-79df6bcc97-sq7cg to master-0 | ||
openshift-console |
console-c75dc494b-tvf5c |
Scheduled |
Successfully assigned openshift-console/console-c75dc494b-tvf5c to master-0 | ||
openshift-machine-api |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Scheduled |
Successfully assigned openshift-machine-api/metal3-image-customization-7b5d8dfcfd-gjzrj to master-0 | ||
openshift-machine-config-operator |
machine-config-daemon-hgc52 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-hgc52 to master-0 | ||
openshift-console |
console-7988f8bb7-j9w48 |
Scheduled |
Successfully assigned openshift-console/console-7988f8bb7-j9w48 to master-0 | ||
openshift-marketplace |
community-operators-wqngb |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-wqngb to master-0 | ||
openshift-marketplace |
certified-operators-tkx45 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-tkx45 to master-0 | ||
openshift-marketplace |
93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf |
Scheduled |
Successfully assigned openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf to master-0 | ||
openshift-marketplace |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 |
Scheduled |
Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 to master-0 | ||
openshift-machine-config-operator |
machine-config-operator-84d549f6d5-fdwf5 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-operator-84d549f6d5-fdwf5 to master-0 | ||
openstack |
swift-storage-0 |
Scheduled |
Successfully assigned openstack/swift-storage-0 to master-0 | ||
openshift-image-registry |
node-ca-qd25m |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-qd25m to master-0 | ||
openshift-marketplace |
2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 |
Scheduled |
Successfully assigned openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 to master-0 | ||
openstack |
openstackclient |
Scheduled |
Successfully assigned openstack/openstackclient to master-0 | ||
openstack |
openstack-galera-0 |
Scheduled |
Successfully assigned openstack/openstack-galera-0 to master-0 | ||
openstack |
openstack-cell1-galera-0 |
Scheduled |
Successfully assigned openstack/openstack-cell1-galera-0 to master-0 | ||
openstack |
nova-scheduler-0 |
Scheduled |
Successfully assigned openstack/nova-scheduler-0 to master-0 | ||
openstack |
nova-scheduler-0 |
Scheduled |
Successfully assigned openstack/nova-scheduler-0 to master-0 | ||
openstack |
nova-scheduler-0 |
Scheduled |
Successfully assigned openstack/nova-scheduler-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-cell1-novncproxy-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 | ||
openstack |
nova-cell1-novncproxy-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 | ||
openstack |
nova-cell1-db-create-jshcc |
Scheduled |
Successfully assigned openstack/nova-cell1-db-create-jshcc to master-0 | ||
openstack |
nova-cell1-conductor-db-sync-s6rxr |
Scheduled |
Successfully assigned openstack/nova-cell1-conductor-db-sync-s6rxr to master-0 | ||
openstack |
nova-cell1-conductor-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-conductor-0 to master-0 | ||
openstack |
nova-cell1-cell-mapping-nzzbt |
Scheduled |
Successfully assigned openstack/nova-cell1-cell-mapping-nzzbt to master-0 | ||
openstack |
nova-cell1-5ec6-account-create-update-fn8fv |
Scheduled |
Successfully assigned openstack/nova-cell1-5ec6-account-create-update-fn8fv to master-0 | ||
openstack |
nova-cell0-ea37-account-create-update-c8nf7 |
Scheduled |
Successfully assigned openstack/nova-cell0-ea37-account-create-update-c8nf7 to master-0 | ||
openstack |
nova-cell0-db-create-zlb6q |
Scheduled |
Successfully assigned openstack/nova-cell0-db-create-zlb6q to master-0 | ||
openstack |
nova-cell0-conductor-db-sync-x9mns |
Scheduled |
Successfully assigned openstack/nova-cell0-conductor-db-sync-x9mns to master-0 | ||
openstack |
nova-cell0-conductor-0 |
Scheduled |
Successfully assigned openstack/nova-cell0-conductor-0 to master-0 | ||
openstack |
nova-cell0-cell-mapping-t8sfd |
Scheduled |
Successfully assigned openstack/nova-cell0-cell-mapping-t8sfd to master-0 | ||
openstack |
nova-api-db-create-j7tk6 |
Scheduled |
Successfully assigned openstack/nova-api-db-create-j7tk6 to master-0 | ||
openstack |
nova-api-8395-account-create-update-8nzrz |
Scheduled |
Successfully assigned openstack/nova-api-8395-account-create-update-8nzrz to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openshift-cluster-machine-approver |
machine-approver-5c6485487f-cscz5 |
Scheduled |
Successfully assigned openshift-cluster-machine-approver/machine-approver-5c6485487f-cscz5 to master-0 | ||
openstack |
neutron-db-sync-hbzpf |
Scheduled |
Successfully assigned openstack/neutron-db-sync-hbzpf to master-0 | ||
openstack |
neutron-db-create-wjhhn |
Scheduled |
Successfully assigned openstack/neutron-db-create-wjhhn to master-0 | ||
openshift-marketplace |
1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr |
Scheduled |
Successfully assigned openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr to master-0 | ||
openshift-nmstate |
nmstate-console-plugin-86f58fcf4-dlgsc |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-console-plugin-86f58fcf4-dlgsc to master-0 | ||
openshift-nmstate |
nmstate-handler-gns5r |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-handler-gns5r to master-0 | ||
openshift-nmstate |
nmstate-metrics-9b8c8685d-cpgt6 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-metrics-9b8c8685d-cpgt6 to master-0 | ||
openshift-nmstate |
nmstate-operator-796d4cfff4-h6jnz |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-operator-796d4cfff4-h6jnz to master-0 | ||
openstack |
neutron-886c-account-create-update-24ntn |
Scheduled |
Successfully assigned openstack/neutron-886c-account-create-update-24ntn to master-0 | ||
openstack |
neutron-85f97d8d64-dfwgh |
Scheduled |
Successfully assigned openstack/neutron-85f97d8d64-dfwgh to master-0 | ||
openstack |
neutron-7cd95f9d78-s2fkv |
Scheduled |
Successfully assigned openstack/neutron-7cd95f9d78-s2fkv to master-0 | ||
openstack |
neutron-77db675565-g4zz2 |
Scheduled |
Successfully assigned openstack/neutron-77db675565-g4zz2 to master-0 | ||
openstack |
memcached-0 |
Scheduled |
Successfully assigned openstack/memcached-0 to master-0 | ||
openstack |
keystone-db-sync-vk8gz |
Scheduled |
Successfully assigned openstack/keystone-db-sync-vk8gz to master-0 | ||
openstack |
keystone-db-create-wpbkz |
Scheduled |
Successfully assigned openstack/keystone-db-create-wpbkz to master-0 | ||
openshift-nmstate |
nmstate-webhook-5f558f5558-5wgm6 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-webhook-5f558f5558-5wgm6 to master-0 | ||
metallb-system |
metallb-operator-webhook-server-754b74fdf5-vvbj2 |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-754b74fdf5-vvbj2 to master-0 | ||
| | openstack | | keystone-cron-29565301-skqb8 | Scheduled | Successfully assigned openstack/keystone-cron-29565301-skqb8 to master-0 |
| | openstack | | keystone-cron-29565241-vpcdg | Scheduled | Successfully assigned openstack/keystone-cron-29565241-vpcdg to master-0 |
| | openstack | | keystone-bootstrap-xl426 | Scheduled | Successfully assigned openstack/keystone-bootstrap-xl426 to master-0 |
| | metallb-system | | controller-7bb4cc7c98-jkh97 | Scheduled | Successfully assigned metallb-system/controller-7bb4cc7c98-jkh97 to master-0 |
| | openstack | | keystone-bootstrap-hlqwd | Scheduled | Successfully assigned openstack/keystone-bootstrap-hlqwd to master-0 |
| | openstack | | keystone-6b44d66bc9-5zxbb | Scheduled | Successfully assigned openstack/keystone-6b44d66bc9-5zxbb to master-0 |
| | openstack | | keystone-16fb-account-create-update-8cp5c | Scheduled | Successfully assigned openstack/keystone-16fb-account-create-update-8cp5c to master-0 |
| | openstack | | glance-fc3e-account-create-update-btzjb | Scheduled | Successfully assigned openstack/glance-fc3e-account-create-update-btzjb to master-0 |
| | openstack | | glance-db-sync-zxw2c | Scheduled | Successfully assigned openstack/glance-db-sync-zxw2c to master-0 |
| | openstack | | glance-db-create-nj4vf | Scheduled | Successfully assigned openstack/glance-db-create-nj4vf to master-0 |
| | openstack | | glance-3a5fd-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-3a5fd-default-internal-api-0 to master-0 |
| | openstack | | glance-3a5fd-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-3a5fd-default-internal-api-0 to master-0 |
| | openstack | | glance-3a5fd-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-3a5fd-default-internal-api-0 to master-0 |
| | openstack | | glance-3a5fd-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-3a5fd-default-external-api-0 to master-0 |
| | openstack | | glance-3a5fd-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-3a5fd-default-external-api-0 to master-0 |
| | metallb-system | | frr-k8s-dttqv | Scheduled | Successfully assigned metallb-system/frr-k8s-dttqv to master-0 |
| | openstack | | glance-3a5fd-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-3a5fd-default-external-api-0 to master-0 |
| | openstack | | edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm | Scheduled | Successfully assigned openstack/edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm to master-0 |
| | openstack | | edpm-b-provisionserver-checksum-discovery-x7j8z | Scheduled | Successfully assigned openstack/edpm-b-provisionserver-checksum-discovery-x7j8z to master-0 |
| | openstack | | edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 | Scheduled | Successfully assigned openstack/edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 to master-0 |
| | openstack | | edpm-a-provisionserver-checksum-discovery-lfsjb | Scheduled | Successfully assigned openstack/edpm-a-provisionserver-checksum-discovery-lfsjb to master-0 |
| | openstack | | dnsmasq-dns-d8f46bbdf-cnrwt | Scheduled | Successfully assigned openstack/dnsmasq-dns-d8f46bbdf-cnrwt to master-0 |
| | openstack | | dnsmasq-dns-b4cc6f549-55sdk | Scheduled | Successfully assigned openstack/dnsmasq-dns-b4cc6f549-55sdk to master-0 |
| | openstack | | dnsmasq-dns-9748bd58f-s2fbq | Scheduled | Successfully assigned openstack/dnsmasq-dns-9748bd58f-s2fbq to master-0 |
| | openstack | | dnsmasq-dns-86659cf465-r6c25 | Scheduled | Successfully assigned openstack/dnsmasq-dns-86659cf465-r6c25 to master-0 |
| | openstack | | dnsmasq-dns-85f88f897-5c5kd | Scheduled | Successfully assigned openstack/dnsmasq-dns-85f88f897-5c5kd to master-0 |
| | openstack | | dnsmasq-dns-849fd5d677-sdj8j | Scheduled | Successfully assigned openstack/dnsmasq-dns-849fd5d677-sdj8j to master-0 |
| | openstack | | dnsmasq-dns-8476fd89bc-6bm4q | Scheduled | Successfully assigned openstack/dnsmasq-dns-8476fd89bc-6bm4q to master-0 |
| | openstack | | dnsmasq-dns-7cb6bf676c-xlvsw | Scheduled | Successfully assigned openstack/dnsmasq-dns-7cb6bf676c-xlvsw to master-0 |
| | openstack | | dnsmasq-dns-7bb8ffc699-2qz2r | Scheduled | Successfully assigned openstack/dnsmasq-dns-7bb8ffc699-2qz2r to master-0 |
| | openstack | | dnsmasq-dns-76849d6659-8tphm | Scheduled | Successfully assigned openstack/dnsmasq-dns-76849d6659-8tphm to master-0 |
| | openstack | | dnsmasq-dns-6ff8fd9d5c-qk9z4 | Scheduled | Successfully assigned openstack/dnsmasq-dns-6ff8fd9d5c-qk9z4 to master-0 |
| | openstack | | dnsmasq-dns-6ddd7f485-2r6bg | Scheduled | Successfully assigned openstack/dnsmasq-dns-6ddd7f485-2r6bg to master-0 |
| | openstack | | dnsmasq-dns-6897ccd865-b6qgp | Scheduled | Successfully assigned openstack/dnsmasq-dns-6897ccd865-b6qgp to master-0 |
| | openstack | | dnsmasq-dns-685c76cf85-cdfrk | Scheduled | Successfully assigned openstack/dnsmasq-dns-685c76cf85-cdfrk to master-0 |
| | openstack | | dnsmasq-dns-6796764987-gtg4x | Scheduled | Successfully assigned openstack/dnsmasq-dns-6796764987-gtg4x to master-0 |
| | openstack | | dnsmasq-dns-5bf8b865dc-vtxcj | Scheduled | Successfully assigned openstack/dnsmasq-dns-5bf8b865dc-vtxcj to master-0 |
| | openstack | | dnsmasq-dns-59697cf549-dzw8p | Scheduled | Successfully assigned openstack/dnsmasq-dns-59697cf549-dzw8p to master-0 |
| | openstack | | dnsmasq-dns-5687765f45-jhnth | Scheduled | Successfully assigned openstack/dnsmasq-dns-5687765f45-jhnth to master-0 |
| | openstack | | cinder-db-create-8nppp | Scheduled | Successfully assigned openstack/cinder-db-create-8nppp to master-0 |
| | openstack | | cinder-7ba05-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-volume-lvm-iscsi-0 to master-0 |
| | metallb-system | | frr-k8s-webhook-server-bcc4b6f68-sfpc9 | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-bcc4b6f68-sfpc9 to master-0 |
| | openshift-marketplace | | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 to master-0 |
| | openstack | | cinder-7ba05-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-scheduler-0 to master-0 |
| | openstack | | cinder-7ba05-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-scheduler-0 to master-0 |
| | openstack | | cinder-7ba05-db-sync-jdc2m | Scheduled | Successfully assigned openstack/cinder-7ba05-db-sync-jdc2m to master-0 |
| | openstack | | cinder-7ba05-backup-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-backup-0 to master-0 |
| | openstack | | cinder-7ba05-backup-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-backup-0 to master-0 |
| | openstack | | cinder-7ba05-api-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-api-0 to master-0 |
| | openstack | | cinder-7ba05-api-0 | Scheduled | Successfully assigned openstack/cinder-7ba05-api-0 to master-0 |
| | openstack | | cinder-3735-account-create-update-59xbx | Scheduled | Successfully assigned openstack/cinder-3735-account-create-update-59xbx to master-0 |
| | metallb-system | | metallb-operator-controller-manager-6d7b76b756-hw274 | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-6d7b76b756-hw274 to master-0 |
| | openshift-storage | | vg-manager-jzfd5 | Scheduled | Successfully assigned openshift-storage/vg-manager-jzfd5 to master-0 |
| | openshift-storage | | lvms-operator-c6dbd8b78-6p8rh | Scheduled | Successfully assigned openshift-storage/lvms-operator-c6dbd8b78-6p8rh to master-0 |
| | openshift-operators | | perses-operator-f44656786-v74wx | Scheduled | Successfully assigned openshift-operators/perses-operator-f44656786-v74wx to master-0 |
| | openshift-operators | | observability-operator-6dd7dd855f-lm5gw | Scheduled | Successfully assigned openshift-operators/observability-operator-6dd7dd855f-lm5gw to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 to master-0 |
| | openshift-operators | | obo-prometheus-operator-8ff7d675-wdrhg | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-8ff7d675-wdrhg to master-0 |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_2cb2ca9c-9ecf-463c-840a-ff501bc21e91 became leader |
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_5df77ffd-7c6e-42d6-822e-cd61c7b934d5 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_20a54124-f503-4a41-ba1f-d770eb7fde2c became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_521d7067-4bb2-42d7-a99d-3c5532c1ab1c became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-gn85g |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_e285ce79-2756-45db-86b8-b6b1c1f4e197 became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_521d7067-4bb2-42d7-a99d-3c5532c1ab1c stopped leading |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_73e61ee9-a8a0-441f-ae2c-d9e4f38db428 became leader |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-56d8475767 to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_11ab1bc7-3138-4c02-b1bc-73ccd6f87898 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| | openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-67dcd4998 to 1 |
| | openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-8c94f4649 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
| | openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-ff989d6cc to 1 |
| | openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-6bb5bfb6fd to 1 |
| | openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-d65958b8 to 1 |
| | openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-7bd846bfc4 to 1 |
| | openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-dddff6458 to 1 |
| | openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-9c5679d8f to 1 |
| | openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-b865698dc to 1 |
| (x2) | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-89ccd998f to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-8544cbcf9c to 1 |
| | openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-5885bfd7f4 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace |
| (x9) | assisted-installer | default-scheduler | assisted-installer-controller-gn85g | FailedScheduling | no nodes available to schedule pods |
| (x12) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8c94f4649 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-8c94f4649-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-67dcd4998 | FailedCreate | Error creating: pods "cluster-olm-operator-67dcd4998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace |
| (x12) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-ff989d6cc | FailedCreate | Error creating: pods "kube-controller-manager-operator-ff989d6cc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace |
| (x12) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6bb5bfb6fd | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-6bb5bfb6fd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-network-operator | replicaset-controller | network-operator-7bd846bfc4 | FailedCreate | Error creating: pods "network-operator-7bd846bfc4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-dddff6458 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-dddff6458-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-d65958b8 | FailedCreate | Error creating: pods "openshift-apiserver-operator-d65958b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-b865698dc | FailedCreate | Error creating: pods "service-ca-operator-b865698dc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator | replicaset-controller | dns-operator-9c5679d8f | FailedCreate | Error creating: pods "dns-operator-9c5679d8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-598fbc5f8f to 1 |
| (x12) | openshift-marketplace | replicaset-controller | marketplace-operator-89ccd998f | FailedCreate | Error creating: pods "marketplace-operator-89ccd998f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-5f5d689c6b to 1 |
| | openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-598fbc5f8f to 1 |
| (x12) | openshift-etcd-operator | replicaset-controller | etcd-operator-8544cbcf9c | FailedCreate | Error creating: pods "etcd-operator-8544cbcf9c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-58845fbb57 to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-7b95f86987 to 1 |
| | openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-58845fbb57 to 1 |
| (x12) | openshift-authentication-operator | replicaset-controller | authentication-operator-5885bfd7f4 | FailedCreate | Error creating: pods "authentication-operator-5885bfd7f4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-5549dc66cb to 1 |
| (x10) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7b95f86987 | FailedCreate | Error creating: pods "package-server-manager-7b95f86987-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-8b68b9d9b to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-5c9796789 to 1 |
| (x10) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-5f5d689c6b | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-5f5d689c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | FailedCreate | Error creating: pods "cluster-version-operator-56d8475767-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-66b84d69b to 1 |
| (x9) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-5549dc66cb | FailedCreate | Error creating: pods "cluster-image-registry-operator-5549dc66cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-68f85b4d6c to 1 |
| (x10) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | FailedCreate | Error creating: pods "cluster-monitoring-operator-58845fbb57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-config-operator | deployment-controller | openshift-config-operator | ScalingReplicaSet | Scaled up replica set openshift-config-operator-95bf4f4d to 1 |
| (x10) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | FailedCreate | Error creating: pods "cluster-monitoring-operator-58845fbb57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5c9796789 | FailedCreate | Error creating: pods "olm-operator-5c9796789-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-ingress-operator | replicaset-controller | ingress-operator-66b84d69b | FailedCreate | Error creating: pods "ingress-operator-66b84d69b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
Required control plane pods have been created | ||||
| (x11) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-598fbc5f8f |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-8b68b9d9b |
FailedCreate |
Error creating: pods "kube-apiserver-operator-8b68b9d9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
| (x11) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-598fbc5f8f |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
| (x9) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-68f85b4d6c |
FailedCreate |
Error creating: pods "catalog-operator-68f85b4d6c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| (x7) | openshift-config-operator | replicaset-controller | openshift-config-operator-95bf4f4d | FailedCreate | Error creating: pods "openshift-config-operator-95bf4f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_b94e3182-911d-4f39-8bf6-06b35dec7cc7 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_e5e539d6-5ede-4efe-9e8f-4d01cd69c494 became leader |
| (x5) | assisted-installer | default-scheduler | assisted-installer-controller-gn85g | FailedScheduling | no nodes available to schedule pods |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_5e297c7b-cd00-4cf4-81a5-6276877cd7c3 became leader |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| (x8) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-ff989d6cc | FailedCreate | Error creating: pods "kube-controller-manager-operator-ff989d6cc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8c94f4649 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-8c94f4649-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6bb5bfb6fd | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-6bb5bfb6fd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-5549dc66cb | FailedCreate | Error creating: pods "cluster-image-registry-operator-5549dc66cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-ingress-operator | replicaset-controller | ingress-operator-66b84d69b | FailedCreate | Error creating: pods "ingress-operator-66b84d69b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-dns-operator | replicaset-controller | dns-operator-9c5679d8f | FailedCreate | Error creating: pods "dns-operator-9c5679d8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-dddff6458 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-dddff6458-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-8b68b9d9b | FailedCreate | Error creating: pods "kube-apiserver-operator-8b68b9d9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-etcd-operator | replicaset-controller | etcd-operator-8544cbcf9c | FailedCreate | Error creating: pods "etcd-operator-8544cbcf9c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | FailedCreate | Error creating: pods "cluster-monitoring-operator-58845fbb57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7b95f86987 | FailedCreate | Error creating: pods "package-server-manager-7b95f86987-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-network-operator | replicaset-controller | network-operator-7bd846bfc4 | FailedCreate | Error creating: pods "network-operator-7bd846bfc4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-b865698dc | FailedCreate | Error creating: pods "service-ca-operator-b865698dc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-marketplace | replicaset-controller | marketplace-operator-89ccd998f | FailedCreate | Error creating: pods "marketplace-operator-89ccd998f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-d65958b8 | FailedCreate | Error creating: pods "openshift-apiserver-operator-d65958b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-5f5d689c6b | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-5f5d689c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-68f85b4d6c | FailedCreate | Error creating: pods "catalog-operator-68f85b4d6c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5c9796789 | FailedCreate | Error creating: pods "olm-operator-5c9796789-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | FailedCreate | Error creating: pods "cluster-version-operator-56d8475767-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-dns-operator | replicaset-controller | dns-operator-9c5679d8f | SuccessfulCreate | Created pod: dns-operator-9c5679d8f-fdxtp |
| (x9) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | FailedCreate | Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-authentication-operator | replicaset-controller | authentication-operator-5885bfd7f4 | FailedCreate | Error creating: pods "authentication-operator-5885bfd7f4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-67dcd4998 | FailedCreate | Error creating: pods "cluster-olm-operator-67dcd4998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-8b68b9d9b | SuccessfulCreate | Created pod: kube-apiserver-operator-8b68b9d9b-tvm5p |
| | openshift-dns-operator | default-scheduler | dns-operator-9c5679d8f-fdxtp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-66b84d69b | SuccessfulCreate | Created pod: ingress-operator-66b84d69b-pgdrx |
| | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8c94f4649 | SuccessfulCreate | Created pod: openshift-controller-manager-operator-8c94f4649-xhzf9 |
| (x9) | openshift-config-operator | replicaset-controller | openshift-config-operator-95bf4f4d | FailedCreate | Error creating: pods "openshift-config-operator-95bf4f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7b95f86987 | SuccessfulCreate | Created pod: package-server-manager-7b95f86987-gltb5 |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-8544cbcf9c | SuccessfulCreate | Created pod: etcd-operator-8544cbcf9c-ct498 |
| | assisted-installer | kubelet | assisted-installer-controller-gn85g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016" |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-58845fbb57-z2869 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-68f85b4d6c | SuccessfulCreate | Created pod: catalog-operator-68f85b4d6c-j92kd |
| | openshift-network-operator | default-scheduler | network-operator-7bd846bfc4-jxvxl | Scheduled | Successfully assigned openshift-network-operator/network-operator-7bd846bfc4-jxvxl to master-0 |
| | openshift-service-ca-operator | replicaset-controller | service-ca-operator-b865698dc | SuccessfulCreate | Created pod: service-ca-operator-b865698dc-wwkqz |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6bb5bfb6fd | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | SuccessfulCreate | Created pod: cluster-monitoring-operator-58845fbb57-z2869 |
| | openshift-ingress-operator | default-scheduler | ingress-operator-66b84d69b-pgdrx | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-ff989d6cc | SuccessfulCreate | Created pod: kube-controller-manager-operator-ff989d6cc-rcnnp |
| | assisted-installer | default-scheduler | assisted-installer-controller-gn85g | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-gn85g to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-7b95f86987-gltb5 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-5c9796789-wjbt2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-dddff6458-6fzwb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-marketplace | replicaset-controller | marketplace-operator-89ccd998f | SuccessfulCreate | Created pod: marketplace-operator-89ccd998f-6qck2 |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5c9796789 | SuccessfulCreate | Created pod: olm-operator-5c9796789-wjbt2 |
| | openshift-network-operator | replicaset-controller | network-operator-7bd846bfc4 | SuccessfulCreate | Created pod: network-operator-7bd846bfc4-jxvxl |
| | openshift-marketplace | default-scheduler | marketplace-operator-89ccd998f-6qck2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-dddff6458 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-dddff6458-6fzwb |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-b865698dc-wwkqz | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-8b68b9d9b-tvm5p | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-5549dc66cb | SuccessfulCreate | Created pod: cluster-image-registry-operator-5549dc66cb-dcmsc |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-68f85b4d6c-j92kd | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-8c94f4649-xhzf9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-5549dc66cb-dcmsc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-ff989d6cc-rcnnp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-etcd-operator | default-scheduler | etcd-operator-8544cbcf9c-ct498 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-5f5d689c6b | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-5f5d689c6b-dspnb |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-67dcd4998 | SuccessfulCreate | Created pod: cluster-olm-operator-67dcd4998-wrdwm |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | SuccessfulCreate | Created pod: cluster-version-operator-56d8475767-sbhx2 |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-95bf4f4d | SuccessfulCreate | Created pod: openshift-config-operator-95bf4f4d-bqqqq |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" |
| | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | SuccessfulCreate | Created pod: cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-5f5d689c6b-dspnb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-authentication-operator | replicaset-controller | authentication-operator-5885bfd7f4 | SuccessfulCreate | Created pod: authentication-operator-5885bfd7f4-z8gbk |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-d65958b8-96qpx | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-config-operator | default-scheduler | openshift-config-operator-95bf4f4d-bqqqq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-authentication-operator | default-scheduler | authentication-operator-5885bfd7f4-z8gbk | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-56d8475767-sbhx2 | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-56d8475767-sbhx2 to master-0 |
| | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-d65958b8 | SuccessfulCreate | Created pod: openshift-apiserver-operator-d65958b8-96qpx |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-67dcd4998-wrdwm | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Started | Started container network-operator |
| | assisted-installer | kubelet | assisted-installer-controller-gn85g | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016" in 5.914s (5.914s including waiting). Image size: 687949580 bytes. |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" in 5.372s (5.372s including waiting). Image size: 621648710 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-gn85g | Started | Started container assisted-installer-controller |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Created | Created container: network-operator |
| | assisted-installer | kubelet | assisted-installer-controller-gn85g | Created | Created container: assisted-installer-controller |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_892dc2e2-939a-43da-8596-949e42adeae3 became leader |
| | openshift-network-operator | default-scheduler | mtu-prober-cnb74 | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-cnb74 to master-0 |
| | assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed |
| | openshift-network-operator | kubelet | mtu-prober-cnb74 | Started | Started container prober |
| | openshift-network-operator | kubelet | mtu-prober-cnb74 | Created | Created container: prober |
| | openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-cnb74 |
| | openshift-network-operator | kubelet | mtu-prober-cnb74 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| | openshift-network-operator | job-controller | mtu-prober | Completed | Job completed |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-multus namespace | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-tjzdb | |
openshift-multus |
kubelet |
multus-8svct |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-tjzdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946" | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-tjzdb |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-tjzdb to master-0 | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-8svct | |
openshift-multus |
kubelet |
multus-8svct |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" | |
openshift-multus |
default-scheduler |
multus-8svct |
Scheduled |
Successfully assigned openshift-multus/multus-8svct to master-0 | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-8svct | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-tjzdb |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-tjzdb to master-0 | |
openshift-multus |
default-scheduler |
multus-8svct |
Scheduled |
Successfully assigned openshift-multus/multus-8svct to master-0 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946" | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-p76jz | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-p76jz | |
openshift-multus |
default-scheduler |
network-metrics-daemon-p76jz |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-p76jz to master-0 | |
openshift-multus |
default-scheduler |
network-metrics-daemon-p76jz |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-p76jz to master-0 | |
openshift-multus |
default-scheduler |
multus-admission-controller-5dbbb8b86f-mc76b |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-5dbbb8b86f to 1 | |
openshift-multus |
replicaset-controller |
multus-admission-controller-5dbbb8b86f |
SuccessfulCreate |
Created pod: multus-admission-controller-5dbbb8b86f-mc76b | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-5dbbb8b86f to 1 | |
openshift-multus |
replicaset-controller |
multus-admission-controller-5dbbb8b86f |
SuccessfulCreate |
Created pod: multus-admission-controller-5dbbb8b86f-mc76b | |
openshift-multus |
default-scheduler |
multus-admission-controller-5dbbb8b86f-mc76b |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ovn-kubernetes namespace | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946" in 2.469s (2.469s including waiting). Image size: 528956487 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tjzdb |
Created |
Created container: egress-router-binary-copy | |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d" in 5.282s (5.282s including waiting). Image size: 683195416 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Created | Created container: cni-plugins |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-57f769d897 | SuccessfulCreate | Created pod: ovnkube-control-plane-57f769d897-r75tv |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-control-plane-57f769d897-r75tv | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-r75tv to master-0 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Created | Created container: kube-rbac-proxy |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-22clf | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-22clf to master-0 |
| | openshift-multus | kubelet | multus-8svct | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" in 12.361s (12.361s including waiting). Image size: 1238100502 bytes. |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-57f769d897 to 1 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-22clf |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-b4bf74f6 to 1 |
| | openshift-network-diagnostics | default-scheduler | network-check-source-b4bf74f6-wqvfk | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-b4bf74f6 | SuccessfulCreate | Created pod: network-check-source-b4bf74f6-wqvfk |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Created | Created container: bond-cni-plugin |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-95w9b |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Started | Started container bond-cni-plugin |
| | openshift-network-diagnostics | default-scheduler | network-check-target-95w9b | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-95w9b to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e" in 1.766s (1.766s including waiting). Image size: 411587146 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-network-node-identity | default-scheduler | network-node-identity-kqb2h | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-kqb2h to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014" in 1.578s (1.578s including waiting). Image size: 407347125 bytes. |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-kqb2h |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa" |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-p76jz | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-p76jz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Started | Started container webhook |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa" in 17.685s (17.685s including waiting). Image size: 876160834 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Started | Started container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" in 24.809s (24.809s including waiting). Image size: 1637455533 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Started | Started container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Created | Created container: ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" in 24.661s (24.661s including waiting). Image size: 1637455533 bytes. |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-57f769d897-r75tv became leader |
| | openshift-network-node-identity | master-0_36186c7e-fb88-4e49-9c0d-21a28b36fbfb | ovnkube-identity | LeaderElection | master-0_36186c7e-fb88-4e49-9c0d-21a28b36fbfb became leader |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Created | Created container: approver |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" in 17.579s (17.579s including waiting). Image size: 1637455533 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container ovn-acl-logging |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Created | Created container: whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-22clf |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tjzdb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-22clf | Started | Started container sbdb |
| | default | ovnkube-csr-approver-controller | csr-m7th8 | CSRApproved | CSR "csr-m7th8" has been approved |
| (x8) | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-sbhx2 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-95w9b | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wrs54" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-95w9b | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-multus | kubelet | multus-8svct | Created | Created container: kube-multus |
| (x2) | openshift-multus | kubelet | multus-8svct | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-8svct | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-fwjzr | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-fwjzr to master-0 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-fwjzr |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fwjzr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | default | ovnkube-csr-approver-controller | csr-fl7bq | CSRApproved | CSR "csr-fl7bq" has been approved |
| | default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0] |
| | default | ovnkube-csr-approver-controller | csr-gqgdh | CSRApproved | CSR "csr-gqgdh" has been approved |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-5c9796789-wjbt2 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-5c9796789-wjbt2 to master-0 |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-8b68b9d9b-tvm5p | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-tvm5p to master-0 |
| | openshift-dns-operator | default-scheduler | dns-operator-9c5679d8f-fdxtp | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-9c5679d8f-fdxtp to master-0 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-5f5d689c6b-dspnb | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-dspnb to master-0 |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-d65958b8-96qpx | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-96qpx to master-0 |
| | openshift-ingress-operator | default-scheduler | ingress-operator-66b84d69b-pgdrx | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-66b84d69b-pgdrx to master-0 |
| | openshift-etcd-operator | default-scheduler | etcd-operator-8544cbcf9c-ct498 | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-8544cbcf9c-ct498 to master-0 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-5549dc66cb-dcmsc | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-5549dc66cb-dcmsc to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-68f85b4d6c-j92kd | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-j92kd to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-7b95f86987-gltb5 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-gltb5 to master-0 |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-ff989d6cc-rcnnp | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-rcnnp to master-0 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-58845fbb57-z2869 | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-58845fbb57-z2869 to master-0 |
| | openshift-multus | default-scheduler | multus-admission-controller-5dbbb8b86f-mc76b | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5dbbb8b86f-mc76b to master-0 |
| | openshift-authentication-operator | default-scheduler | authentication-operator-5885bfd7f4-z8gbk | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-5885bfd7f4-z8gbk to master-0 |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-dddff6458-6fzwb | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-6fzwb to master-0 |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-wh9q6 to master-0 |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-67dcd4998-wrdwm | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-wrdwm to master-0 |
| | openshift-config-operator | default-scheduler | openshift-config-operator-95bf4f4d-bqqqq | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-95bf4f4d-bqqqq to master-0 |
| | openshift-marketplace | default-scheduler | marketplace-operator-89ccd998f-6qck2 | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-89ccd998f-6qck2 to master-0 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw to master-0 |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-b865698dc-wwkqz | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-b865698dc-wwkqz to master-0 |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-8c94f4649-xhzf9 | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-xhzf9 to master-0 |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-8c94f4649-xhzf9 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-dddff6458-6fzwb | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | multus | cluster-olm-operator-67dcd4998-wrdwm | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" |
| | openshift-config-operator | multus | openshift-config-operator-95bf4f4d-bqqqq | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-tvm5p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-2s58d |
| | openshift-authentication-operator | multus | authentication-operator-5885bfd7f4-z8gbk | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-z8gbk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-xhzf9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" |
openshift-kube-controller-manager-operator |
multus |
kube-controller-manager-operator-ff989d6cc-rcnnp |
AddedInterface |
Add eth0 [10.128.0.23/23] from ovn-kubernetes | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-b865698dc-wwkqz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" | |
openshift-service-ca-operator |
multus |
service-ca-operator-b865698dc-wwkqz |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-d65958b8-96qpx |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" | |
openshift-network-operator |
kubelet |
iptables-alerter-2s58d |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" | |
openshift-apiserver-operator |
multus |
openshift-apiserver-operator-d65958b8-96qpx |
AddedInterface |
Add eth0 [10.128.0.22/23] from ovn-kubernetes | |
openshift-etcd-operator |
multus |
etcd-operator-8544cbcf9c-ct498 |
AddedInterface |
Add eth0 [10.128.0.5/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-8b68b9d9b-tvm5p |
AddedInterface |
Add eth0 [10.128.0.24/23] from ovn-kubernetes | |
openshift-etcd-operator |
kubelet |
etcd-operator-8544cbcf9c-ct498 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" | |
openshift-network-operator |
default-scheduler |
iptables-alerter-2s58d |
Scheduled |
Successfully assigned openshift-network-operator/iptables-alerter-2s58d to master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-8b68b9d9b-tvm5p |
Created |
Created container: kube-apiserver-operator | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-serviceaccountissuercontroller |
kube-apiserver-operator |
ServiceAccountIssuer |
Issuer set to default value "https://kubernetes.default.svc" | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-8b68b9d9b-tvm5p |
Started |
Started container kube-apiserver-operator | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-8b68b9d9b-tvm5p_f31ef750-2edb-45d5-9d26-152af4f5906b became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-5f5d689c6b-dspnb |
AddedInterface |
Add eth0 [10.128.0.26/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-5f5d689c6b-dspnb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.35" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodeObserved |
Observed new master node master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.35"}] | |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-5c9796789-wjbt2 |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-7b95f86987-gltb5 |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x4) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-58845fbb57-z2869 |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x4) | openshift-ingress-operator |
kubelet |
ingress-operator-66b84d69b-pgdrx |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x4) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-58845fbb57-z2869 |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x4) | openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-mc76b |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x4) | openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-mc76b |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x4) | openshift-marketplace |
kubelet |
marketplace-operator-89ccd998f-6qck2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x4) | openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-fdxtp |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected changed from Unknown to False ("All is well") | |
| (x4) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-5549dc66cb-dcmsc |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-68f85b4d6c-j92kd |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing | |
default |
kubelet |
master-0 |
NodeHasSufficientMemory |
Node master-0 status is now: NodeHasSufficientMemory | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing | |
default |
kubelet |
master-0 |
Starting |
Starting kubelet. | |
default |
kubelet |
master-0 |
NodeAllocatableEnforced |
Updated Node Allocatable limit across pods | |
default |
kubelet |
master-0 |
NodeHasNoDiskPressure |
Node master-0 status is now: NodeHasNoDiskPressure | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" | |
default |
kubelet |
master-0 |
NodeHasSufficientPID |
Node master-0 status is now: NodeHasSufficientPID | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252": pull QPS exceeded | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-8c94f4649-xhzf9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" | |
openshift-network-operator |
kubelet |
iptables-alerter-2s58d |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-d65958b8-96qpx |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e": pull QPS exceeded | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-d65958b8-96qpx |
Failed |
Error: ErrImagePull | |
openshift-authentication-operator |
kubelet |
authentication-operator-5885bfd7f4-z8gbk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw |
Failed |
Error: ErrImagePull | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-wrdwm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-InternalLoadBalancerServing-certrotationcontroller |
kube-apiserver-operator |
RotationError |
configmaps "loadbalancer-serving-ca" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing | |
openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-bqqqq |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483": rpc error: code = Canceled desc = copying config: context canceled | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/bound-service-account-signing-key -n openshift-kube-apiserver: secrets "bound-service-account-signing-key" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Failed | Error: ErrImagePull |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing |
| | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Failed | Error: ErrImagePull |
| | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | CustomResourceDefinitionUpdated | Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-96qpx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-dspnb | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-dspnb | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("true")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []any{string("0s")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Started | Started container copy-catalogd-manifests |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing |
| | openshift-network-operator | kubelet | iptables-alerter-2s58d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" in 11.692s (11.692s including waiting). Image size: 582154903 bytes. |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-z8gbk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" in 11.672s (11.672s including waiting). Image size: 513221333 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Created | Created container: copy-catalogd-manifests |
| | openshift-network-diagnostics | multus | network-check-target-95w9b | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" in 11.672s (11.672s including waiting). Image size: 448042136 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" in 2.456s (2.456s including waiting). Image size: 504625081 bytes. |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5885bfd7f4-z8gbk_4498a836-efd0-46b5-b844-254c009755f1 became leader |
| | openshift-network-diagnostics | kubelet | network-check-target-95w9b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-xhzf9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" in 13.262s (13.262s including waiting). Image size: 507972093 bytes. |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-96qpx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" in 2.462s (2.462s including waiting). Image size: 512274055 bytes. |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.35" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found",Upgradeable changed from Unknown to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "" to "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("") |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n" |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIServerURL | loginURL changed from to https://api.sno.openstack.lab:6443 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTokenConfig | accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| (x63) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
no observedConfig |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTemplates |
templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] | |
| (x2) | openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-bqqqq |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator |
authentication-operator |
CSRApproval |
The CSR "system:openshift:openshift-authenticator-d5czl" has been approved | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
CSRCreated |
A csr "system:openshift:openshift-authenticator-d5czl" is created for OpenShiftAuthenticatorCertRequester | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-dddff6458-6fzwb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-network-operator |
kubelet |
iptables-alerter-2s58d |
Created |
Created container: iptables-alerter | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-oauth-apiserver because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-oauth-apiserver namespace | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found | |
openshift-network-operator |
kubelet |
iptables-alerter-2s58d |
Started |
Started container iptables-alerter | |
openshift-network-diagnostics |
kubelet |
network-check-target-95w9b |
Started |
Started container network-check-target-container | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing | |
openshift-network-diagnostics |
kubelet |
network-check-target-95w9b |
Created |
Created container: network-check-target-container | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing | |
| (x7) | openshift-cluster-version |
kubelet |
cluster-version-operator-56d8475767-sbhx2 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| (x7) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
| (x7) | openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-fdxtp |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x7) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x7) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-5549dc66cb-dcmsc |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x7) | openshift-marketplace |
kubelet |
marketplace-operator-89ccd998f-6qck2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-5f5d689c6b-dspnb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" |
| (x7) | openshift-ingress-operator |
kubelet |
ingress-operator-66b84d69b-pgdrx |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x7) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftAuthenticatorCertRequester is available | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n" | |
| (x7) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator-lock |
LeaderElection |
openshift-apiserver-operator-d65958b8-96qpx_2d1a1a7c-3221-4270-a6cb-4a276a548d49 became leader | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-authentication because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-authentication namespace | |
| (x2) | openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorVersionChanged |
clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.35" |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to BuildCSIVolumes=true | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7030c5cce"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5004457a"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" | |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.35" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/api -n openshift-oauth-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-8c94f4649-xhzf9_3bbaad7f-90f8-458a-b74b-2b7e8753dcd2 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw_fe93d9c9-1d88-4363-bee0-2ffb3835b1e1 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/client-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-84f7754698 to 1 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" in 11.847s (11.847s including waiting). Image size: 438654374 bytes. |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Created | Created container: openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Started | Started container openshift-api |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "APIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-dspnb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" in 3.193s (3.193s including waiting). Image size: 506480167 bytes. |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" in 1.274s (1.274s including waiting). Image size: 518384969 bytes. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" in 11.85s (11.85s including waiting). Image size: 508888171 bytes. |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/client-ca -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" in 9.85s (9.85s including waiting). Image size: 506395599 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | RoutingConfigSubdomainChanged | Domain changed from "" to "apps.sno.openstack.lab" |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-dspnb | Started | Started container csi-snapshot-controller-operator |
| | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" in 11.823s (11.823s including waiting). Image size: 508544745 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Started | Started container copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Created | Created container: copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" in 16.666s (16.666s including waiting). Image size: 495065340 bytes. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-dspnb | Created | Created container: csi-snapshot-controller-operator |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator | default-scheduler | migrator-8487694857-nkvjk | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-8487694857-nkvjk to master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-dddff6458-6fzwb_3d967ba6-965a-4993-9d90-f40cf6e8c242 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-kube-scheduler-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.35" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator |
kube-storage-version-migrator-operator |
DeploymentCreated |
Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-8487694857 |
SuccessfulCreate |
Created pod: migrator-8487694857-nkvjk | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-wrdwm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11" | |
openshift-kube-storage-version-migrator |
deployment-controller |
migrator |
ScalingReplicaSet |
Scaled up replica set migrator-8487694857 to 1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.35"}] | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-kube-storage-version-migrator |
multus |
migrator-8487694857-nkvjk |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-6d9bb777f5 to 1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6d9bb777f5-x9r5p |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6d9bb777f5-x9r5p to master-0 | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-nkvjk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator-lock |
LeaderElection |
csi-snapshot-controller-operator-5f5d689c6b-dspnb_ebbdf8ae-25b8-4b56-8246-f1ba76dd6004 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6d9bb777f5 |
SuccessfulCreate |
Created pod: route-controller-manager-6d9bb777f5-x9r5p | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-ff989d6cc-rcnnp_6ed86e97-eb84-4778-8349-07e052ed4e39 became leader | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from Unknown to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found") | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
NamespaceCreated |
Created Namespace/openshift-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigWriteError |
Failed to write observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator-lock |
LeaderElection |
service-ca-operator-b865698dc-wwkqz_8ebed07e-febf-4b1b-a4cb-d9f234eede91 became leader | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
ServiceAccountCreated |
Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller |
csi-snapshot-controller-operator |
DeploymentCreated |
Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-8544cbcf9c-ct498_7de58a59-780d-4bf4-a34a-30f149ff65ff became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-apiserver because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-64854d9cff to 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "extendedArguments": map[string]any{ +Â "cluster-cidr": []any{string("10.128.0.0/16")}, +Â "cluster-name": []any{string("sno-s8rnh")}, +Â "feature-gates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +Â }, +Â "service-cluster-ip-range": []any{string("172.30.0.0/16")}, +Â }, +Â "featureGates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), +Â string("DisableKubeletCloudCredentialProviders=true"), +Â string("GCPLabelsTags=true"), string("HardwareSpeed=true"), +Â string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), +Â string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), +Â string("MultiArchInstallAWS=true"), ..., +Â }, +Â "servingInfo": map[string]any{ +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â }, Â Â } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
CABundleUpdateRequired |
"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist | |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6d9bb777f5-x9r5p |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6d9bb777f5-x9r5p |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6d9bb777f5-x9r5p |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.35" |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.35"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | TargetUpdateRequired | "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| (x11) | openshift-controller-manager | replicaset-controller | controller-manager-84f7754698 | FailedCreate | Error creating: pods "controller-manager-84f7754698-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well"),Progressing changed from Unknown to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-5547669f67 to 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-64854d9cff-dzfgb | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-dzfgb to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.18.35" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.35"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"etcd-pod\" not found") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-64854d9cff | SuccessfulCreate | Created pod: csi-snapshot-controller-64854d9cff-dzfgb |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "RevisionControllerDegraded: configmap \"etcd-pod\" not found" to "RevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "RevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]" to "RevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nNodeControllerDegraded: All master nodes are ready",Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-nkvjk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0" in 14.841s (14.841s including waiting). Image size: 443272037 bytes. | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces 
\"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-route-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | DeploymentUpdated | Updated Deployment.apps/service-ca -n openshift-service-ca because it changed |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | DeploymentCreated | Created Deployment.apps/service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator-resource-sync-controller-resourcesynccontroller | service-ca-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-config-managed because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ConfigMapCreated | Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | SecretCreated | Created Secret/signing-key -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca | deployment-controller | service-ca | ScalingReplicaSet | Scaled up replica set service-ca-79bc6b8d76 to 1 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing |
| | openshift-service-ca | replicaset-controller | service-ca-79bc6b8d76 | SuccessfulCreate | Created pod: service-ca-79bc6b8d76-xlhg9 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-service-ca | default-scheduler | service-ca-79bc6b8d76-xlhg9 | Scheduled | Successfully assigned openshift-service-ca/service-ca-79bc6b8d76-xlhg9 to master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-58bdf45c89 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6d9bb777f5 to 0 from 1 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6d9bb777f5 | SuccessfulDelete | Deleted pod: route-controller-manager-6d9bb777f5-x9r5p |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-58bdf45c89 | SuccessfulCreate | Created pod: route-controller-manager-58bdf45c89-nnbc4 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-58bdf45c89-nnbc4 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11" in 16.353s (16.353s including waiting). Image size: 511164375 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" in 16.344s (16.344s including waiting). Image size: 495994673 bytes. |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-controller-manager | default-scheduler | controller-manager-866d56f9b-6dc8n | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-866d56f9b-6dc8n to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-controller-manager | replicaset-controller | controller-manager-866d56f9b | SuccessfulCreate | Created pod: controller-manager-866d56f9b-6dc8n |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-84f7754698 to 0 from 1 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-866d56f9b to 1 from 0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-5547669f67 | SuccessfulCreate | Created pod: apiserver-5547669f67-dhd9c |
| | openshift-oauth-apiserver | default-scheduler | apiserver-5547669f67-dhd9c | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-5547669f67-dhd9c to master-0 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") |
| | openshift-service-ca | kubelet | service-ca-79bc6b8d76-xlhg9 | Created | Created container: service-ca-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24" |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-64854d9cff-dzfgb | AddedInterface | Add eth0 [10.128.0.29/23] from ovn-kubernetes |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-95bf4f4d-bqqqq_9baf2bf3-62db-422a-9d80-12179296968c became leader |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6f75697dcf to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-866d56f9b to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-866d56f9b | SuccessfulDelete | Deleted pod: controller-manager-866d56f9b-6dc8n |
| (x3) | openshift-controller-manager | kubelet | controller-manager-866d56f9b-6dc8n | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-58bdf45c89-nnbc4 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-58bdf45c89-nnbc4 to master-0 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-58bdf45c89 | SuccessfulDelete | Deleted pod: route-controller-manager-58bdf45c89-nnbc4 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6ff4f4c6f6-zm9dj | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap,data.openshift-controller-manager.openshift-global-ca.configmap |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6ff4f4c6f6 | SuccessfulCreate | Created pod: route-controller-manager-6ff4f4c6f6-zm9dj |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f75697dcf | SuccessfulCreate | Created pod: controller-manager-6f75697dcf-p28rf |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-58bdf45c89 to 0 from 1 |
| | openshift-controller-manager | default-scheduler | controller-manager-6f75697dcf-p28rf | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6ff4f4c6f6 to 1 from 0 |
| | openshift-service-ca | multus | service-ca-79bc6b8d76-xlhg9 | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-service-ca | kubelet | service-ca-79bc6b8d76-xlhg9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" already present on machine |
| | openshift-service-ca | kubelet | service-ca-79bc6b8d76-xlhg9 | Started | Started container service-ca-controller |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-nkvjk | Started | Started container graceful-termination |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-nkvjk | Created | Created container: graceful-termination |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-nkvjk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0" already present on machine |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-nkvjk | Started | Started container migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-nkvjk | Created | Created container: migrator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt |
| (x2) | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.35" |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-67dcd4998-wrdwm_99f6aa38-aa04-46dc-9f89-f5477ee55ab8 became leader |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveServiceCAConfigMap |
observed change in config | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "extendedArguments": map[string]any{ +Â "cluster-cidr": []any{string("10.128.0.0/16")}, +Â "cluster-name": []any{string("sno-s8rnh")}, +Â "feature-gates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +Â }, +Â "service-cluster-ip-range": []any{string("172.30.0.0/16")}, +Â }, +Â "featureGates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), +Â string("DisableKubeletCloudCredentialProviders=true"), +Â string("GCPLabelsTags=true"), string("HardwareSpeed=true"), +Â string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), +Â string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), +Â string("MultiArchInstallAWS=true"), ..., +Â }, +Â "serviceServingCert": map[string]any{ +Â "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), +Â }, +Â "servingInfo": map[string]any{ +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â }, Â Â } | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
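The ConfigObservationDegraded message above ("the object has been modified; please apply your changes to the latest version and try again") is the standard Kubernetes optimistic-concurrency conflict: an update is rejected when the object's `resourceVersion` changed between the client's read and write, and the client is expected to re-fetch and retry (client-go exposes this pattern as `retry.RetryOnConflict`). A minimal sketch of that retry loop, using a hypothetical in-memory store as a stand-in for the API server:

```python
import copy

class Conflict(Exception):
    pass

class Store:
    """Hypothetical stand-in for the API server: rejects writes whose
    resourceVersion is stale, exactly like a 409 Conflict."""
    def __init__(self, obj):
        self.obj = copy.deepcopy(obj)

    def get(self):
        return copy.deepcopy(self.obj)

    def update(self, obj):
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified")
        self.obj = {**copy.deepcopy(obj),
                    "resourceVersion": obj["resourceVersion"] + 1}

def retry_on_conflict(store, mutate, attempts=5):
    """Re-read the latest object and reapply the change on each conflict."""
    for _ in range(attempts):
        obj = store.get()
        mutate(obj)
        try:
            store.update(obj)
            return store.get()
        except Conflict:
            continue  # a competing writer got there first; start over
    raise Conflict("retries exhausted")

store = Store({"resourceVersion": 1, "spec": {}})
first = {"done": False}

def mutate(obj):
    obj["spec"]["observedConfig"] = "new"
    if not first["done"]:               # simulate one racing writer that
        first["done"] = True            # bumps the version between our
        store.obj["resourceVersion"] += 1  # read and our write

result = retry_on_conflict(store, mutate)
print(result["spec"]["observedConfig"], result["resourceVersion"])
```

The first attempt fails with a conflict (the racing writer moved the version from 1 to 2); the second attempt reads version 2, writes successfully, and the store advances to version 3.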
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
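The feature-gate value recorded by ObserveFeatureFlagsUpdated above is a flat comma-separated list of `Name=true`/`Name=false` pairs. A small illustrative parser (not part of any OpenShift tooling) that splits such a string into enabled and disabled gate sets:

```python
# Parse a feature-gate list of the form "A=true,B=false,..." as logged
# by ObserveFeatureFlagsUpdated. Hypothetical helper for illustration.

def parse_feature_gates(s):
    enabled, disabled = set(), set()
    for item in s.split(","):
        name, _, value = item.partition("=")
        (enabled if value == "true" else disabled).add(name)
    return enabled, disabled

# A shortened sample of the string from the event above:
gates = "AdminNetworkPolicy=true,NewOLM=true,GatewayAPI=false,NodeSwap=false"
on, off = parse_feature_gates(gates)
print(sorted(on))   # ['AdminNetworkPolicy', 'NewOLM']
print(sorted(off))  # ['GatewayAPI', 'NodeSwap']
```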
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt |
| | openshift-apiserver | default-scheduler | apiserver-c765cd67b-cvhxl | Scheduled | Successfully assigned openshift-apiserver/apiserver-c765cd67b-cvhxl to master-0 |
| | openshift-apiserver | replicaset-controller | apiserver-c765cd67b | SuccessfulCreate | Created pod: apiserver-c765cd67b-cvhxl |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{OperatorAvailable True 2026-03-19 09:20:37 +0000 UTC AsExpected } {OperatorProgressing False 2026-03-19 09:20:37 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-03-19 09:20:37 +0000 UTC AsExpected }] |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-c765cd67b to 1 |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.18.35" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.35"} {"feature-gates" ""}] |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.35" |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: status.versions changed from [{"operator" "4.18.35"} {"feature-gates" ""}] to [{"operator" "4.18.35"} {"feature-gates" "4.18.35"}] |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.35" |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-79bc6b8d76-xlhg9_e1bb8b91-1946-40a7-bc40-edacb285d622 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | NamespaceUpdated | Updated Namespace/openshift-etcd because it changed |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f75697dcf | SuccessfulDelete | Deleted pod: controller-manager-6f75697dcf-p28rf |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.35"} {"csi-snapshot-controller" "4.18.35"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.35" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.35" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-64854d9cff-dzfgb | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-64854d9cff-dzfgb became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-58bdf45c89-nnbc4 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-58559b7f6c | SuccessfulCreate | Created pod: route-controller-manager-58559b7f6c-j4rrt |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Started | Started container snapshot-controller |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Created | Created container: snapshot-controller |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-58559b7f6c-j4rrt | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24" in 2.368s (2.368s including waiting). Image size: 463705930 bytes. |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-65dbf9584 to 1 from 0 |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6ff4f4c6f6-zm9dj |
FailedScheduling |
skip schedule deleting pod: openshift-route-controller-manager/route-controller-manager-6ff4f4c6f6-zm9dj | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-6f75697dcf to 0 from 1 | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6ff4f4c6f6 | SuccessfulDelete | Deleted pod: route-controller-manager-6ff4f4c6f6-zm9dj |
| | openshift-controller-manager | default-scheduler | controller-manager-6f75697dcf-p28rf | FailedScheduling | skip schedule deleting pod: openshift-controller-manager/controller-manager-6f75697dcf-p28rf |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-5547669f67-dhd9c | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : configmap "etcd-serving-ca" not found |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6ff4f4c6f6 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-58559b7f6c to 1 from 0 |
| | openshift-controller-manager | default-scheduler | controller-manager-65dbf9584-tg7x7 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing |
| (x2) | openshift-apiserver | kubelet | apiserver-c765cd67b-cvhxl | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-65dbf9584 | SuccessfulCreate | Created pod: controller-manager-65dbf9584-tg7x7 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| (x3) | openshift-apiserver | kubelet | apiserver-c765cd67b-cvhxl | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : configmap "etcd-serving-ca" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager | multus | controller-manager-65dbf9584-tg7x7 | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-65dbf9584-tg7x7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing |
| (x9) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMissing | no observedConfig |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-58559b7f6c-j4rrt | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-58559b7f6c-j4rrt to master-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"v4-0-config-system-service-ca\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-65dbf9584-tg7x7 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-65dbf9584-tg7x7 to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
| (x4) | openshift-apiserver | kubelet | apiserver-c765cd67b-cvhxl | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : secret "etcd-client" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" already present on machine |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Started | Started container openshift-config-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-bqqqq | Created | Created container: openshift-config-operator |
| (x4) | openshift-apiserver | kubelet | apiserver-c765cd67b-cvhxl | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| (x5) | openshift-oauth-apiserver | kubelet | apiserver-5547669f67-dhd9c | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x5) | openshift-oauth-apiserver | kubelet | apiserver-5547669f67-dhd9c | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : secret "etcd-client" not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"v4-0-config-system-service-ca\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"v4-0-config-system-service-ca\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/catalogd-service -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-95bf4f4d-bqqqq_ff48d3b0-efc5-4106-a33f-0bd129a1007a became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| (x39) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." |
| | openshift-apiserver | default-scheduler | apiserver-66c44d7ccf-z4ssv | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-apiserver | replicaset-controller | apiserver-66c44d7ccf | SuccessfulCreate | Created pod: apiserver-66c44d7ccf-z4ssv |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-66c44d7ccf to 1 from 0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-c765cd67b to 0 from 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-apiserver | replicaset-controller | apiserver-c765cd67b | SuccessfulDelete | Deleted pod: apiserver-c765cd67b-cvhxl |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found" |
| | openshift-controller-manager | kubelet | controller-manager-65dbf9584-tg7x7 | Started | Started container controller-manager |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5547669f67 to 0 from 1 |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-65dbf9584-tg7x7 became leader |
| | openshift-controller-manager | kubelet | controller-manager-65dbf9584-tg7x7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" in 3.457s (3.457s including waiting). Image size: 558211175 bytes. |
| | openshift-controller-manager | kubelet | controller-manager-65dbf9584-tg7x7 | Created | Created container: controller-manager |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7558b877c5 | SuccessfulCreate | Created pod: apiserver-7558b877c5-pb68b |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7558b877c5 to 1 from 0 |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7558b877c5-pb68b | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-5547669f67 | SuccessfulDelete | Deleted pod: apiserver-5547669f67-dhd9c |
| | openshift-apiserver | default-scheduler | apiserver-66c44d7ccf-z4ssv | Scheduled | Successfully assigned openshift-apiserver/apiserver-66c44d7ccf-z4ssv to master-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver | multus | apiserver-66c44d7ccf-z4ssv | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing |
| | openshift-apiserver | kubelet | apiserver-66c44d7ccf-z4ssv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing |
| | openshift-image-registry | multus | cluster-image-registry-operator-5549dc66cb-dcmsc | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-dcmsc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89" |
| (x8) | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-wjbt2 | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-catalogd | deployment-controller | catalogd-controller-manager | ScalingReplicaSet | Scaled up replica set catalogd-controller-manager-6864dc98f7 to 1 |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/etcd-client -n openshift-kube-apiserver: secrets "etcd-client" already exists |
| | openshift-operator-controller | default-scheduler | operator-controller-controller-manager-57777556ff-pn5gg | Scheduled | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-57777556ff-pn5gg to master-0 |
| (x8) | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-z2869 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nResourceSyncControllerDegraded: secrets \"etcd-client\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nResourceSyncControllerDegraded: secrets \"etcd-client\" already exists" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found" |
| (x8) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-j92kd | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x8) | openshift-multus | kubelet | network-metrics-daemon-p76jz | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| | openshift-cluster-olm-operator | CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing |
| | openshift-marketplace | multus | marketplace-operator-89ccd998f-6qck2 | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7558b877c5-pb68b | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7558b877c5-pb68b to master-0 |
| (x8) | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-mc76b | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing |
| (x8) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-57777556ff | SuccessfulCreate | Created pod: operator-controller-controller-manager-57777556ff-pn5gg |
| | openshift-operator-controller | deployment-controller | operator-controller-controller-manager | ScalingReplicaSet | Scaled up replica set operator-controller-controller-manager-57777556ff to 1 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-sbhx2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment") |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" |
| | openshift-apiserver | replicaset-controller | apiserver-54cd8888b9 | SuccessfulCreate | Created pod: apiserver-54cd8888b9-q4ztg |
| | openshift-ingress-operator | multus | ingress-operator-66b84d69b-pgdrx | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-catalogd | default-scheduler | catalogd-controller-manager-6864dc98f7-7wdws | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-6864dc98f7-7wdws to master-0 |
| | openshift-operator-controller | multus | operator-controller-controller-manager-57777556ff-pn5gg | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-catalogd | replicaset-controller | catalogd-controller-manager-6864dc98f7 | SuccessfulCreate | Created pod: catalogd-controller-manager-6864dc98f7-7wdws |
| | openshift-operator-controller | operator-controller-controller-manager-57777556ff-pn5gg_c694617a-de26-48d2-b4b4-823de82ec55a | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-57777556ff-pn5gg_c694617a-de26-48d2-b4b4-823de82ec55a became leader |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-7558b877c5-pb68b | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Started | Started container installer |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-58559b7f6c-j4rrt | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-scheduler | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-598fbc5f8f-wh9q6 |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-dns-operator |
multus |
dns-operator-9c5679d8f-fdxtp |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-fdxtp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc" | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-54cd8888b9 to 1 from 0 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-66c44d7ccf to 0 from 1 | |
openshift-apiserver |
replicaset-controller |
apiserver-66c44d7ccf |
SuccessfulDelete |
Deleted pod: apiserver-66c44d7ccf-z4ssv | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-57777556ff-pn5gg |
Created |
Created container: kube-rbac-proxy | |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7558b877c5 | SuccessfulDelete | Deleted pod: apiserver-7558b877c5-pb68b |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"v4-0-config-system-service-ca\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"v4-0-config-system-service-ca\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6fccf84fc5 | SuccessfulCreate | Created pod: apiserver-6fccf84fc5-rnmt2 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6fccf84fc5 to 1 from 0 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7558b877c5 to 0 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Started | Started container kube-rbac-proxy |
| | openshift-oauth-apiserver | default-scheduler | apiserver-6fccf84fc5-rnmt2 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
| (x14) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"v4-0-config-system-service-ca\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| (x55) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-oauth-apiserver | default-scheduler | apiserver-6fccf84fc5-rnmt2 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6fccf84fc5-rnmt2 to master-0 |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-env-var-controller | etcd-operator | EnvVarControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| (x4) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : secret "catalogserver-cert" not found |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| (x4) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : secret "catalogserver-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver: client rate limiter Wait returned an error: context canceled |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdateFailed | Failed to update ConfigMap/client-ca -n openshift-kube-apiserver: client rate limiter Wait returned an error: context canceled |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| (x3) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-tvm5p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7fcf878b4 to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node.",status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.35" |
| | openshift-controller-manager | replicaset-controller | controller-manager-65dbf9584 | SuccessfulDelete | Deleted pod: controller-manager-65dbf9584-tg7x7 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-sbhx2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" in 10.525s (10.525s including waiting). Image size: 517999161 bytes. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-65dbf9584 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-58559b7f6c to 0 from 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-controller-manager | default-scheduler | controller-manager-7fcf878b4-mjm86 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Created | Created container: cluster-node-tuning-operator |
| | openshift-oauth-apiserver | kubelet | apiserver-6fccf84fc5-rnmt2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634" |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Started | Started container kube-rbac-proxy |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Created | Created container: kube-rbac-proxy |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-6fccf84fc5-rnmt2 | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" in 9.83s (9.83s including waiting). Image size: 511227324 bytes. |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" already present on machine |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-sbhx2 | Created | Created container: cluster-version-operator |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Started | Started container kube-rbac-proxy |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-sbhx2 | Started | Started container cluster-version-operator |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_601c322a-3094-43a6-9707-379c2c24f32d became leader |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-catalogd | multus | catalogd-controller-manager-6864dc98f7-7wdws | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| (x3) | openshift-apiserver | default-scheduler | apiserver-54cd8888b9-q4ztg | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" in 10.142s (10.142s including waiting). Image size: 458126937 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Started | Started container cluster-node-tuning-operator |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-tvm5p | Created | Created container: kube-apiserver-operator |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-5549dc66cb-dcmsc_699e9b72-1504-4031-bef3-47798bff8a52 became leader |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-8b68b9d9b-tvm5p |
Started |
Started container kube-apiserver-operator | |
openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-fdxtp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc" in 9.827s (9.827s including waiting). Image size: 468265024 bytes. | |
openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-fdxtp |
Created |
Created container: dns-operator | |
openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-fdxtp |
Started |
Started container dns-operator | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7fcf878b4 |
SuccessfulCreate |
Created pod: controller-manager-7fcf878b4-mjm86 | |
openshift-dns-operator |
cluster-dns-operator |
dns-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-apiserver | kubelet | apiserver-66c44d7ccf-z4ssv | Started | Started container fix-audit-permissions |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-686585f447 to 1 from 0 |
| | openshift-apiserver | kubelet | apiserver-66c44d7ccf-z4ssv | Created | Created container: fix-audit-permissions |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Started | Started container cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Created | Created container: cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-wh9q6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" in 10.34s (10.34s including waiting). Image size: 677942383 bytes. |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-dcmsc | Started | Started container cluster-image-registry-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-dcmsc | Created | Created container: cluster-image-registry-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-dcmsc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89" in 10.143s (10.143s including waiting). Image size: 548752816 bytes. |
| | openshift-controller-manager | kubelet | controller-manager-65dbf9584-tg7x7 | Killing | Stopping container controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-686585f447 | SuccessfulCreate | Created pod: route-controller-manager-686585f447-gm2z5 |
| | openshift-apiserver | kubelet | apiserver-66c44d7ccf-z4ssv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" in 11.127s (11.127s including waiting). Image size: 589386806 bytes. |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-fdxtp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-58559b7f6c | SuccessfulDelete | Deleted pod: route-controller-manager-58559b7f6c-j4rrt |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Started | Started container kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-fdxtp | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | multus | catalogd-controller-manager-6864dc98f7-7wdws | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-58559b7f6c-j4rrt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" |
| | openshift-route-controller-manager | multus | route-controller-manager-58559b7f6c-j4rrt | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-fdxtp | Started | Started container kube-rbac-proxy |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Created | Created container: manager |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-598fbc5f8f-wh9q6_cd341ac6-f501-4c54-9a30-3c8e31045541 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-598fbc5f8f-wh9q6_cd341ac6-f501-4c54-9a30-3c8e31045541 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Started | Started container manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager | multus | controller-manager-7fcf878b4-mjm86 | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-catalogd | catalogd-controller-manager-6864dc98f7-7wdws_099328c8-e47e-4200-8fce-741cd8b9901c | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-6864dc98f7-7wdws_099328c8-e47e-4200-8fce-741cd8b9901c became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-8b68b9d9b-tvm5p_42fc6623-8fed-4046-8792-f6d1cb70d676 became leader |
| | openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-controller-manager | default-scheduler | controller-manager-7fcf878b4-mjm86 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7fcf878b4-mjm86 to master-0 |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-vkw4s |
| | openshift-controller-manager | kubelet | controller-manager-7fcf878b4-mjm86 | Started | Started container controller-manager |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-dns | default-scheduler | dns-default-p88qq | Scheduled | Successfully assigned openshift-dns/dns-default-p88qq to master-0 |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-vkw4s | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-vkw4s to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-7fcf878b4-mjm86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7fcf878b4-mjm86 became leader |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-p88qq |
| | openshift-controller-manager | kubelet | controller-manager-7fcf878b4-mjm86 | Created | Created container: controller-manager |
| | openshift-ingress | replicaset-controller | router-default-7dcf5569b5 | SuccessfulCreate | Created pod: router-default-7dcf5569b5-4cst9 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-dns | kubelet | dns-default-p88qq | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| (x3) | openshift-route-controller-manager | default-scheduler | route-controller-manager-686585f447-gm2z5 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-ingress | default-scheduler | router-default-7dcf5569b5-4cst9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-7dcf5569b5 to 1 |
| | openshift-apiserver | default-scheduler | apiserver-54cd8888b9-q4ztg | Scheduled | Successfully assigned openshift-apiserver/apiserver-54cd8888b9-q4ztg to master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | default-scheduler | node-resolver-pmxm8 | Scheduled | Successfully assigned openshift-dns/node-resolver-pmxm8 to master-0 |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-pmxm8 |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-58559b7f6c-j4rrt_aa715d77-0f02-42b7-823f-79fa771676f0 became leader |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" already present on machine |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vkw4s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" already present on machine |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-dns | multus | dns-default-p88qq | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6fccf84fc5-rnmt2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6fccf84fc5-rnmt2 | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6fccf84fc5-rnmt2 | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6fccf84fc5-rnmt2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634" in 5.528s (5.528s including waiting). Image size: 505345991 bytes. |
| | openshift-dns | kubelet | dns-default-p88qq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12" |
| | openshift-apiserver | multus | apiserver-54cd8888b9-q4ztg | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vkw4s | Created | Created container: tuned |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vkw4s | Started | Started container tuned |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" already present on machine |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Created | Created container: fix-audit-permissions |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Started | Started container fix-audit-permissions |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vkw4s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-58559b7f6c-j4rrt | Killing | Stopping container route-controller-manager |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vkw4s | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vkw4s | Started | Started container tuned |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-dns | kubelet | node-resolver-pmxm8 | Started | Started container dns-node-resolver |
| | openshift-route-controller-manager | kubelet | route-controller-manager-58559b7f6c-j4rrt | Started | Started container route-controller-manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | kubelet | node-resolver-pmxm8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" already present on machine |
| | openshift-dns | kubelet | node-resolver-pmxm8 | Created | Created container: dns-node-resolver |
| | openshift-route-controller-manager | kubelet | route-controller-manager-58559b7f6c-j4rrt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" in 5.546s (5.546s including waiting). Image size: 487096305 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-58559b7f6c-j4rrt | Created | Created container: route-controller-manager |
| | openshift-etcd | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Created | Created container: openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-oauth-apiserver | kubelet | apiserver-6fccf84fc5-rnmt2 | Started | Started container oauth-apiserver |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
kubelet |
apiserver-54cd8888b9-q4ztg |
Started |
Started container openshift-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-6fccf84fc5-rnmt2 |
Created |
Created container: oauth-apiserver | |
openshift-apiserver |
kubelet |
apiserver-54cd8888b9-q4ztg |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-etcd |
kubelet |
installer-1-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-686585f447-gm2z5 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-686585f447-gm2z5 to master-0 | |
| (x53) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | multus | route-controller-manager-686585f447-gm2z5 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/cluster-policy-controller-config has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.35" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.35"}] to [{"operator" "4.18.35"} {"oauth-apiserver" "4.18.35"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing |
| | openshift-dns | kubelet | dns-default-p88qq | Started | Started container dns |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | SuccessfulDelete | Deleted pod: cluster-version-operator-56d8475767-sbhx2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-dns | kubelet | dns-default-p88qq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/webhook-authenticator has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{   "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")},   "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},   "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},   ... // 6 identical entries   },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-686585f447-gm2z5 | Started | Started container route-controller-manager |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-dns | kubelet | dns-default-p88qq | Created | Created container: kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-686585f447-gm2z5 | Created | Created container: route-controller-manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-686585f447-gm2z5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x2) | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| (x2) | openshift-apiserver | kubelet | apiserver-54cd8888b9-q4ztg | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
openshift-kube-scheduler |
multus |
installer-3-master-0 |
AddedInterface |
Add eth0 [10.128.0.49/23] from ovn-kubernetes | |
openshift-cluster-version |
kubelet |
cluster-version-operator-56d8475767-sbhx2 |
Killing |
Stopping container cluster-version-operator | |
openshift-dns |
kubelet |
dns-default-p88qq |
Created |
Created container: dns | |
openshift-dns |
kubelet |
dns-default-p88qq |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12" in 9.76s (9.76s including waiting). Image size: 484187929 bytes. | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller |
authentication-operator |
SecretCreated |
Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapUpdateFailed | Failed to update ConfigMap/client-ca -n openshift-kube-controller-manager: Operation cannot be fulfilled on configmaps "client-ca": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well" |
| (x16) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-56d8475767 to 0 from 1 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_601c322a-3094-43a6-9707-379c2c24f32d stopped leading |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Created | Created container: network-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/webhook-authenticator -n openshift-kube-apiserver: secrets "webhook-authenticator" already exists |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-dns | kubelet | dns-default-p88qq | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-jxvxl | Started | Started container network-operator |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_81f07629-c234-45f6-bb2f-86844cf4e56e became leader |
| | openshift-cluster-version | kubelet | cluster-version-operator-7d58488df-thkn2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" already present on machine |
| | openshift-cluster-version | kubelet | cluster-version-operator-7d58488df-thkn2 | Created | Created container: cluster-version-operator |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-7d58488df-thkn2 | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-7d58488df-thkn2 to master-0 |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-686585f447-gm2z5_082ead1c-d13b-4568-9d16-6789af191f48 became leader |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-7d58488df | SuccessfulCreate | Created pod: cluster-version-operator-7d58488df-thkn2 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-7d58488df to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nResourceSyncControllerDegraded: secrets \"webhook-authenticator\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nResourceSyncControllerDegraded: secrets \"webhook-authenticator\" already exists" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-cluster-version | kubelet | cluster-version-operator-7d58488df-thkn2 | Started | Started container cluster-version-operator |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| (x2) | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set route-controller-manager-6ff75bdd67 to 1 from 0 |
| (x6) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing |
| (x6) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6ff75bdd67-drxcb | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6ff75bdd67 | SuccessfulCreate | Created pod: route-controller-manager-6ff75bdd67-drxcb |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift namespace |
| (x2) | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set controller-manager-6f9655dc5d to 1 from 0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-7fcf878b4 | SuccessfulDelete | Deleted pod: controller-manager-7fcf878b4-mjm86 |
| | openshift-controller-manager | kubelet | controller-manager-7fcf878b4-mjm86 | Killing | Stopping container controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-686585f447 | SuccessfulDelete | Deleted pod: route-controller-manager-686585f447-gm2z5 |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-z8gbk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" already present on machine |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-z8gbk | Started | Started container authentication-operator |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-z8gbk | Created | Created container: authentication-operator |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_21947fb1-605d-46d0-9b36-3255a62fdf0e became leader |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f9655dc5d | SuccessfulCreate | Created pod: controller-manager-6f9655dc5d-8lp25 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-6f9655dc5d-8lp25 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5885bfd7f4-z8gbk_490f102a-321d-40d8-b2a0-590bdd98137e became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-node namespace |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-686585f447-gm2z5 | Killing | Stopping container route-controller-manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.",Available changed from False to True ("All is well") |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.35"}] to [{"operator" "4.18.35"} {"openshift-apiserver" "4.18.35"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.35" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6ff75bdd67-drxcb | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6ff75bdd67-drxcb to master-0 |
| | openshift-controller-manager | default-scheduler | controller-manager-6f9655dc5d-8lp25 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6f9655dc5d-8lp25 to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-controller-manager | multus | controller-manager-6f9655dc5d-8lp25 | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-6f9655dc5d-8lp25 | Started | Started container controller-manager |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6ff75bdd67-drxcb | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6ff75bdd67-drxcb | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6ff75bdd67-drxcb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| | openshift-route-controller-manager | multus | route-controller-manager-6ff75bdd67-drxcb | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
Created <unknown>/v1.security.openshift.io because it was missing | ||
| | openshift-controller-manager | kubelet | controller-manager-6f9655dc5d-8lp25 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-6f9655dc5d-8lp25 | Created | Created container: controller-manager |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6f9655dc5d-8lp25 became leader |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6ff75bdd67-drxcb_7648e485-9220-4934-8c5c-d2e5e5ac2806 became leader |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/template.openshift.io/v1: 401" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-kube-apiserver | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "required configmap/cluster-policy-controller-config has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | default-scheduler | multus-admission-controller-58c9f8fc64-cr9pg | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-58c9f8fc64-cr9pg to master-0 |
| | openshift-multus | replicaset-controller | multus-admission-controller-58c9f8fc64 | SuccessfulCreate | Created pod: multus-admission-controller-58c9f8fc64-cr9pg |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-58c9f8fc64 to 1 |
| | openshift-multus | multus | multus-admission-controller-58c9f8fc64-cr9pg | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe" |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-cr9pg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe" in 1.496s (1.496s including waiting). Image size: 456576198 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-multus | replicaset-controller | multus-admission-controller-5dbbb8b86f | SuccessfulDelete | Deleted pod: multus-admission-controller-5dbbb8b86f-mc76b |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-5dbbb8b86f to 0 from 1 |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-xhzf9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" already present on machine |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-xhzf9 | Started | Started container openshift-controller-manager-operator |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-6f97756bc8 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-6f97756bc8-l8kmn |
| | openshift-machine-api | deployment-controller | control-plane-machine-set-operator | ScalingReplicaSet | Scaled up replica set control-plane-machine-set-operator-6f97756bc8 to 1 |
| | openshift-machine-api | default-scheduler | control-plane-machine-set-operator-6f97756bc8-l8kmn | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-l8kmn to master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-xhzf9 | Created | Created container: openshift-controller-manager-operator |
| | openshift-machine-api | multus | control-plane-machine-set-operator-6f97756bc8-l8kmn | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-machine-api | control-plane-machine-set-operator-6f97756bc8-l8kmn_eb9f6d1d-df5c-4868-8216-17caa9e3fadc | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-6f97756bc8-l8kmn_eb9f6d1d-df5c-4868-8216-17caa9e3fadc became leader |
| | openshift-cluster-machine-approver | default-scheduler | machine-approver-6cb57bb5db-qkbqh | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-6cb57bb5db-qkbqh to master-0 |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" in 2.483s (2.483s including waiting). Image size: 470681292 bytes. |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Created | Created container: control-plane-machine-set-operator |
| (x26) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-6cb57bb5db | SuccessfulCreate | Created pod: machine-approver-6cb57bb5db-qkbqh |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Started | Started container control-plane-machine-set-operator |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-6cb57bb5db to 1 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Started | Started container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdd28dfe7132e19af9f013f72cf120d970bc31b6b74693af262f8d2e82a096e1" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | etcd-master-0-master-0 | Killing | Stopping container etcdctl |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdd28dfe7132e19af9f013f72cf120d970bc31b6b74693af262f8d2e82a096e1" in 2.591s (2.591s including waiting). Image size: 467235741 bytes. |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Created | Created container: machine-approver-controller |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Started | Started container machine-approver-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Created | Created container: approver |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Started | Started container kube-controller-manager-operator |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Created | Created container: cluster-olm-operator |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-rcnnp | Created | Created container: kube-controller-manager-operator |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Started | Started container approver |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Started | Started container cluster-olm-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-wrdwm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11" already present on machine |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-wjbt2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-wjbt2_openshift-operator-lifecycle-manager_8aa0f17a-287e-4a19-9a59-4913e7707071_0(3557a1b45dd90816953dc552eea9a193dd5b6c16976411f913644e5838d29b1c): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-wjbt2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3557a1b45dd90816953dc552eea9a193dd5b6c16976411f913644e5838d29b1c" Netns:"/var/run/netns/e08277fa-1304-4e4f-99c1-ee3ce5f3905f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-wjbt2;K8S_POD_INFRA_CONTAINER_ID=3557a1b45dd90816953dc552eea9a193dd5b6c16976411f913644e5838d29b1c;K8S_POD_UID=8aa0f17a-287e-4a19-9a59-4913e7707071" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-wjbt2] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-wjbt2/8aa0f17a-287e-4a19-9a59-4913e7707071]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-wjbt2 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-wjbt2 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-wjbt2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-j92kd | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-j92kd_openshift-operator-lifecycle-manager_208939f5-8fca-4fd5-b0c6-43484b7d1e30_0(5e6530db9d30bd22d87f35871d24fe1e9b352cc3dd09cce4a1e86c5991a24377): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-j92kd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e6530db9d30bd22d87f35871d24fe1e9b352cc3dd09cce4a1e86c5991a24377" Netns:"/var/run/netns/970b68fd-eea9-46ff-b513-0e04d17e2d4f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-j92kd;K8S_POD_INFRA_CONTAINER_ID=5e6530db9d30bd22d87f35871d24fe1e9b352cc3dd09cce4a1e86c5991a24377;K8S_POD_UID=208939f5-8fca-4fd5-b0c6-43484b7d1e30" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-j92kd] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-j92kd/208939f5-8fca-4fd5-b0c6-43484b7d1e30]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-j92kd in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-j92kd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-j92kd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-multus | kubelet | network-metrics-daemon-p76jz | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-p76jz_openshift-multus_4256d841-23cb-4756-b827-f44ee6e54def_0(af3a06bf93cd551ce9021bf4a28c6006f61769f4b8b90e164386ba252949d5a7): error adding pod openshift-multus_network-metrics-daemon-p76jz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"af3a06bf93cd551ce9021bf4a28c6006f61769f4b8b90e164386ba252949d5a7" Netns:"/var/run/netns/c0771a82-ab35-44c9-a61c-59fe770379db" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-p76jz;K8S_POD_INFRA_CONTAINER_ID=af3a06bf93cd551ce9021bf4a28c6006f61769f4b8b90e164386ba252949d5a7;K8S_POD_UID=4256d841-23cb-4756-b827-f44ee6e54def" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-p76jz] networking: Multus: [openshift-multus/network-metrics-daemon-p76jz/4256d841-23cb-4756-b827-f44ee6e54def]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-p76jz in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-p76jz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-p76jz?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-z2869 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-z2869_openshift-monitoring_7ad3ef11-90df-40b1-acbf-ed9b0c708ddb_0(60d35db2459cccd128085c6639fe78473206722d81851530fcbceb862473a1ef): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-z2869 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"60d35db2459cccd128085c6639fe78473206722d81851530fcbceb862473a1ef" Netns:"/var/run/netns/ad8edbac-3572-460e-902e-aab48e82c6d8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-z2869;K8S_POD_INFRA_CONTAINER_ID=60d35db2459cccd128085c6639fe78473206722d81851530fcbceb862473a1ef;K8S_POD_UID=7ad3ef11-90df-40b1-acbf-ed9b0c708ddb" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-z2869] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-z2869/7ad3ef11-90df-40b1-acbf-ed9b0c708ddb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-z2869 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-z2869 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-z2869?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-gltb5_openshift-operator-lifecycle-manager_1f2148fe-f9f6-47da-894c-b88dae360ebe_0(2112b2c9adb2d0fb0ed222edddd7adc437cbd771b174dd85e078521ab58ddc3a): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-gltb5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2112b2c9adb2d0fb0ed222edddd7adc437cbd771b174dd85e078521ab58ddc3a" Netns:"/var/run/netns/712c2c85-ce5d-4530-8545-b2e9c4a3e393" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-gltb5;K8S_POD_INFRA_CONTAINER_ID=2112b2c9adb2d0fb0ed222edddd7adc437cbd771b174dd85e078521ab58ddc3a;K8S_POD_UID=1f2148fe-f9f6-47da-894c-b88dae360ebe" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-gltb5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-gltb5/1f2148fe-f9f6-47da-894c-b88dae360ebe]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-gltb5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-gltb5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-gltb5?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | ProbeError | Liveness probe error: Get "https://10.128.0.5:8443/healthz": dial tcp 10.128.0.5:8443: connect: connection refused body: |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Unhealthy | Liveness probe failed: Get "https://10.128.0.5:8443/healthz": dial tcp 10.128.0.5:8443: connect: connection refused |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Created | Created container: service-ca-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Started | Started container service-ca-operator |
openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-wwkqz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" already present on machine | |
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine | |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Started | Started container kube-scheduler-operator-container |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-6fzwb | Created | Created container: kube-scheduler-operator-container |
openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" already present on machine | |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-96qpx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" already present on machine |
openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine | |
openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" already present on machine | |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" already present on machine |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Created | Created container: ingress-operator |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | Created | Created container: kube-storage-version-migrator-operator |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-96qpx | Created | Created container: openshift-apiserver-operator |
openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-z2869 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113" | |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-96qpx | Started | Started container openshift-apiserver-operator |
| (x2) | openshift-monitoring | multus | cluster-monitoring-operator-58845fbb57-z2869 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| (x2) | openshift-multus | multus | network-metrics-daemon-p76jz | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Started | Started container marketplace-operator |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Created | Created container: marketplace-operator |
| (x2) | openshift-operator-lifecycle-manager | multus | package-server-manager-7b95f86987-gltb5 | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-multus | kubelet | network-metrics-daemon-p76jz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739" | |
| (x2) | openshift-operator-lifecycle-manager | multus | olm-operator-5c9796789-wjbt2 | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Created | Created container: etcd-operator |
| (x2) | openshift-operator-lifecycle-manager | multus | catalog-operator-68f85b4d6c-j92kd | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-ct498 | Started | Started container etcd-operator |
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Started | Started container ingress-operator |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw | Started | Started container kube-storage-version-migrator-operator |
openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Created | Created container: kube-rbac-proxy | |
openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Unhealthy | Liveness probe failed: Get "http://10.128.0.40:8081/healthz": dial tcp 10.128.0.40:8081: connect: connection refused | |
openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Started | Started container kube-rbac-proxy | |
openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-wjbt2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" | |
openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-j92kd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" | |
openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" | |
openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | ProbeError | Liveness probe error: Get "http://10.128.0.40:8081/healthz": dial tcp 10.128.0.40:8081: connect: connection refused body: | |
openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | ProbeError | Readiness probe error: Get "http://10.128.0.40:8081/readyz": dial tcp 10.128.0.40:8081: connect: connection refused body: | |
openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Unhealthy | Readiness probe failed: Get "http://10.128.0.40:8081/readyz": dial tcp 10.128.0.40:8081: connect: connection refused | |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Started | Started container manager |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Created | Created container: manager |
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-d65958b8-96qpx_6fd803de-ec86-4fa6-baee-158a310d5948 became leader | |
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" already present on machine |
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-6bb5bfb6fd-hn7cw_c79fab5e-55d7-4ae1-9904-12465eebd80f became leader | |
openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-z2869 | Started | Started container cluster-monitoring-operator | |
openshift-multus | kubelet | network-metrics-daemon-p76jz | Created | Created container: network-metrics-daemon | |
openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-z2869 | Created | Created container: cluster-monitoring-operator | |
openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-z2869 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113" in 4.001s (4.001s including waiting). Image size: 484450894 bytes. | |
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-multus | kubelet | network-metrics-daemon-p76jz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739" in 3.97s (3.97s including waiting). Image size: 448828620 bytes. | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-multus | kubelet | network-metrics-daemon-p76jz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739" in 3.97s (3.97s including waiting). Image size: 448828620 bytes. |
| | openshift-multus | kubelet | network-metrics-daemon-p76jz | Started | Started container kube-rbac-proxy |
| | openshift-cluster-machine-approver | master-0_58e0f9f8-6fce-445e-bc8a-66787edacad8 | cluster-machine-approver-leader | LeaderElection | master-0_58e0f9f8-6fce-445e-bc8a-66787edacad8 became leader |
| | openshift-multus | kubelet | network-metrics-daemon-p76jz | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-p76jz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-p76jz | Created | Created container: kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0319 09:21:23.206137 1 cmd.go:413] Getting controller reference for node master-0 I0319 09:21:23.219311 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0319 09:21:23.219376 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0319 09:21:23.219389 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0319 09:21:23.299806 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0319 09:21:53.300687 1 cmd.go:524] Getting installer pods for node master-0 F0319 09:22:07.301897 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config-2 -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed" |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_12fd25ed-126b-461d-8fee-d8de3e10e21d became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-h6z5t" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-h6z5t" has been approved |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-mj5nr" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "All is well" |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:23.206137 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219311 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219376 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.219389 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.299806 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:53.300687 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:07.301897 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:23.206137 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219311 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219376 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.219389 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.299806 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:53.300687 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:07.301897 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: " |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:23.206137 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219311 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219376 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.219389 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.299806 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:53.300687 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:07.301897 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:23.206137 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219311 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:23.219376 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.219389 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:23.299806 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:53.300687 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:07.301897 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: ",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.user.openshift.io)]\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-wjbt2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" in 15.534s (15.534s including waiting). Image size: 862657321 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-mj5nr" has been approved |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-j92kd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" in 16.539s (16.539s including waiting). Image size: 862657321 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-wjbt2 | Created | Created container: olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-wjbt2 | Started | Started container olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" in 17.87s (17.87s including waiting). Image size: 862657321 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-j92kd | Started | Started container catalog-operator |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-j92kd | Created | Created container: catalog-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.user.openshift.io)]\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Started | Started container package-server-manager |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-gltb5 | Created | Created container: package-server-manager |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_ebc1c92b-c779-4c7c-b13e-359896b2ce4a became leader |
| | openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-operator-lifecycle-manager | package-server-manager-7b95f86987-gltb5_95f0c4ad-b128-4844-8ccd-1b755236452c | packageserver-controller-lock | LeaderElection | package-server-manager-7b95f86987-gltb5_95f0c4ad-b128-4844-8ccd-1b755236452c became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-machine-api | replicaset-controller | machine-api-operator-6fbb6cf6f9 | SuccessfulCreate | Created pod: machine-api-operator-6fbb6cf6f9-qx75g |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-6f69995874 to 1 |
| | openshift-marketplace | kubelet | community-operators-wqngb | Created | Created container: extract-utilities |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Started | Started container extract-utilities |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-69c6b55594 to 1 |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6f69995874 | SuccessfulCreate | Created pod: cluster-baremetal-operator-6f69995874-nm9nx |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-69c6b55594 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-69c6b55594-l2279 |
| | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-7d87854d6 | SuccessfulCreate | Created pod: cluster-storage-operator-7d87854d6-g96tv |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-85f7577d78 to 1 |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-85f7577d78 | SuccessfulCreate | Created pod: cluster-samples-operator-85f7577d78-mfxr5 |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-65cccc5599 | SuccessfulCreate | Created pod: packageserver-65cccc5599-mhl2j |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | RequirementsUnknown | requirements not yet checked |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-65cccc5599 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-marketplace | kubelet | community-operators-wqngb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | multus | community-operators-wqngb | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | deployment-controller | cluster-storage-operator | ScalingReplicaSet | Scaled up replica set cluster-storage-operator-7d87854d6 to 1 |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-866dc4744 | SuccessfulCreate | Created pod: cluster-autoscaler-operator-866dc4744-hzrg4 |
| | openshift-machine-config-operator | replicaset-controller | machine-config-operator-84d549f6d5 | SuccessfulCreate | Created pod: machine-config-operator-84d549f6d5-fdwf5 |
| | openshift-machine-config-operator | deployment-controller | machine-config-operator | ScalingReplicaSet | Scaled up replica set machine-config-operator-84d549f6d5 to 1 |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-744f9dbf77 to 1 |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-744f9dbf77 | SuccessfulCreate | Created pod: cloud-credential-operator-744f9dbf77-s7ts2 |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-866dc4744 to 1 |
| | openshift-marketplace | multus | certified-operators-tkx45 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-6fbb6cf6f9 to 1 |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-7559f7c68c | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp |
| | openshift-insights | replicaset-controller | insights-operator-68bf6ff9d6 | SuccessfulCreate | Created pod: insights-operator-68bf6ff9d6-wshz8 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-7559f7c68c to 1 |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-68bf6ff9d6 to 1 |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Created | Created container: extract-utilities |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-744f9dbf77-s7ts2 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | multus | machine-api-operator-6fbb6cf6f9-qx75g | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97" |
| | openshift-machine-api | multus | cluster-baremetal-operator-6f69995874-nm9nx | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-insights | multus | insights-operator-68bf6ff9d6-wshz8 | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-autoscaler-operator-866dc4744-hzrg4 | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | Started | Started container machine-config-operator |
| | openshift-marketplace | kubelet | community-operators-wqngb | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | Created | Created container: machine-config-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| | openshift-marketplace | kubelet | community-operators-wqngb | Started | Started container extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" |
| | openshift-machine-config-operator | multus | machine-config-operator-84d549f6d5-fdwf5 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | multus | redhat-marketplace-wzz6n | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
openshift-operator-lifecycle-manager |
multus |
packageserver-65cccc5599-mhl2j |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-qx75g |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-qx75g |
Started |
Started container kube-rbac-proxy | |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-65cccc5599-mhl2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-65cccc5599-mhl2j | Created | Created container: packageserver |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-65cccc5599-mhl2j | Started | Started container packageserver |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-85f7577d78-mfxr5 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-zpvpd | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-g96tv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310" |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-7d87854d6-g96tv | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Started | Started container kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f2c59d19eb73ad5c0f93b0a63003c1885f5297959c9c45b401d1a74aea6e76" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-insights | kubelet | insights-operator-68bf6ff9d6-wshz8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Created | Created container: extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-hgc52 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Killing | Stopping container machine-approver-controller |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6cb57bb5db-qkbqh | Killing | Stopping container kube-rbac-proxy |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-6cb57bb5db | SuccessfulDelete | Deleted pod: machine-approver-6cb57bb5db-qkbqh |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-6cb57bb5db to 0 from 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-7559f7c68c to 0 from 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-7559f7c68c | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp |
| | openshift-insights | kubelet | insights-operator-68bf6ff9d6-wshz8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" in 39.801s (39.801s including waiting). Image size: 504662731 bytes. |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" in 39.514s (39.514s including waiting). Image size: 456375453 bytes. |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" in 39.514s (39.514s including waiting). Image size: 456375453 bytes. |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" in 41.897s (41.897s including waiting). Image size: 455417803 bytes. |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-5c6485487f | SuccessfulCreate | Created pod: machine-approver-5c6485487f-cscz5 |
| | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | static-pod-installer | installer-2-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-5c6485487f to 1 |
| | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| (x8) | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-etcd | kubelet | etcd-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:9980/readyz": context deadline exceeded body: |
| | openshift-etcd | kubelet | etcd-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:9980/readyz": context deadline exceeded |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Liveness probe error: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Liveness probe failed: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused body: |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-65cccc5599-mhl2j | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-65cccc5599-mhl2j | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | FailedMount | MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-fdwf5 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-g96tv | FailedMount | MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-g96tv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310" already present on machine |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-g96tv | Created | Created container: cluster-storage-operator |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f2c59d19eb73ad5c0f93b0a63003c1885f5297959c9c45b401d1a74aea6e76" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-marketplace | kubelet | community-operators-wqngb | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" already present on machine |
openshift-kube-apiserver |
kubelet |
installer-2-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-hgc52 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | Started | Started container machine-config-daemon |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Started | Started container cloud-credential-operator |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-g96tv | Started | Started container cluster-storage-operator |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | Created | Created container: machine-config-daemon |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-s7ts2 | Created | Created container: cloud-credential-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Created | Created container: cluster-samples-operator |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_4ad871ce-5e77-428b-9840-23fadc3dd07e became leader |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-hgc52 | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Started | Started container cluster-samples-operator |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 4.456s (4.456s including waiting). Image size: 1231028434 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Created | Created container: extract-content |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Started | Started container cluster-samples-operator-watch |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 4.533s (4.533s including waiting). Image size: 1746376668 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 4.571s (4.571s including waiting). Image size: 1252053726 bytes. |
| (x5) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" not registered |
| (x5) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" not registered |
| (x5) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : object "openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" not registered |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-mfxr5 | Created | Created container: cluster-samples-operator-watch |
| | openshift-marketplace | kubelet | community-operators-wqngb | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 3.167s (3.167s including waiting). Image size: 1224180940 bytes. |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-qrrhp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-w6qs5" : [object "openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-marketplace | kubelet | community-operators-wqngb | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" |
| | openshift-marketplace | kubelet | community-operators-wqngb | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-wqngb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Created | Created container: registry-server |
| (x2) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | BackOff | Back-off restarting failed container cluster-autoscaler-operator in pod cluster-autoscaler-operator-866dc4744-hzrg4_openshift-machine-api(d32541c9-eef6-417c-9f5a-a7392dc70aa0) |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 749ms (749ms including waiting). Image size: 918289953 bytes. |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-wzz6n | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-tkx45 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 762ms (762ms including waiting). Image size: 918289953 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" |
| (x2) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | BackOff | Back-off restarting failed container cluster-autoscaler-operator in pod cluster-autoscaler-operator-866dc4744-hzrg4_openshift-machine-api(d32541c9-eef6-417c-9f5a-a7392dc70aa0) |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 438ms (438ms including waiting). Image size: 918289953 bytes. |
| | openshift-marketplace | kubelet | community-operators-wqngb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 699ms (699ms including waiting). Image size: 918289953 bytes. |
| | openshift-network-node-identity | master-0_9ed647c0-aaaf-4910-808d-35396789748d | ovnkube-identity | LeaderElection | master-0_9ed647c0-aaaf-4910-808d-35396789748d became leader |
| | openshift-marketplace | kubelet | community-operators-wqngb | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-wqngb | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-zpvpd | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-ingress | default-scheduler | router-default-7dcf5569b5-4cst9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | default-scheduler | network-check-source-b4bf74f6-wqvfk | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-69c6b55594-l2279 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_1aa7e365-5097-44f1-b52f-123f1beb44f8 became leader |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-69c6b55594-l2279 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-7dff898856 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-7dff898856-rz5nt |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-7dff898856 to 1 |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_f630a9b9-27af-4dc7-b42d-cc960f7281ef became leader |
| | openshift-cloud-controller-manager-operator | default-scheduler | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rz5nt to master-0 |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-7d87854d6-g96tv_f1fe5b22-e34f-4c8f-af4f-1646cb8ed752 became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | openshift-cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-cluster-storage-operator | cluster-storage-operator | openshift-cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | openshift-cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cloud-controller-manager-operator | master-0_c2e205b9-9cf3-45a6-bbeb-a6a418b3c82a | cluster-cloud-config-sync-leader | LeaderElection | master-0_c2e205b9-9cf3-45a6-bbeb-a6a418b3c82a became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | openshift-cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-cloud-controller-manager-operator | master-0_61fe534f-7784-4ec5-bead-4366c3a2acd1 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_61fe534f-7784-4ec5-bead-4366c3a2acd1 became leader |
| (x2) | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | openshift-cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.35" |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Started | Started container kube-rbac-proxy |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" already present on machine |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" already present on machine |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Created | Created container: cluster-autoscaler-operator |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Created | Created container: cluster-autoscaler-operator |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Started | Started container cluster-autoscaler-operator |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-hzrg4 | Started | Started container cluster-autoscaler-operator |
| | openshift-machine-api | cluster-autoscaler-operator-866dc4744-hzrg4_14a7b93d-7325-44fd-8790-0eb04c540c48 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-866dc4744-hzrg4_14a7b93d-7325-44fd-8790-0eb04c540c48 became leader |
| | openshift-cluster-machine-approver | master-0_544ea5ef-6996-4da7-93e6-0689133e423f | cluster-machine-approver-leader | LeaderElection | master-0_544ea5ef-6996-4da7-93e6-0689133e423f became leader |
| | openshift-machine-api | cluster-autoscaler-operator-866dc4744-hzrg4_14a7b93d-7325-44fd-8790-0eb04c540c48 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-866dc4744-hzrg4_14a7b93d-7325-44fd-8790-0eb04c540c48 became leader |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Created | Created container: ingress-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Started | Started container ingress-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-pgdrx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" already present on machine |
| | openshift-machine-api | cluster-baremetal-operator-6f69995874-nm9nx_d5a28a3a-e39a-486d-97d9-bef93084a288 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6f69995874-nm9nx_d5a28a3a-e39a-486d-97d9-bef93084a288 became leader |
| | openshift-machine-api | cluster-baremetal-operator-6f69995874-nm9nx_d5a28a3a-e39a-486d-97d9-bef93084a288 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6f69995874-nm9nx_d5a28a3a-e39a-486d-97d9-bef93084a288 became leader |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.35 |
| | openshift-console-operator | default-scheduler | console-operator-76b6568d85-grltt | Scheduled | Successfully assigned openshift-console-operator/console-operator-76b6568d85-grltt to master-0 |
| | openshift-console-operator | replicaset-controller | console-operator-76b6568d85 | SuccessfulCreate | Created pod: console-operator-76b6568d85-grltt |
| | openshift-console-operator | multus | console-operator-76b6568d85-grltt | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-76b6568d85 to 1 |
| | openshift-console-operator | kubelet | console-operator-76b6568d85-grltt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8" |
| | openshift-console-operator | kubelet | console-operator-76b6568d85-grltt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8" in 2.253s (2.253s including waiting). Image size: 512235769 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.35"}] to [{"raw-internal" "4.18.35"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.35"}] |
| (x14) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14" |
| (x14) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.35" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-8c94f4649-xhzf9_af772a26-8d39-46e9-a44e-ba0857197892 became leader |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-58fff6b545 to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6." to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7." |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7f758fb97d | SuccessfulCreate | Created pod: route-controller-manager-7f758fb97d-qmbkd |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f9655dc5d | SuccessfulDelete | Deleted pod: controller-manager-6f9655dc5d-8lp25 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-7f758fb97d to 1 from 0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-58fff6b545 | SuccessfulCreate | Created pod: controller-manager-58fff6b545-fvbrw |
| | openshift-controller-manager | default-scheduler | controller-manager-58fff6b545-fvbrw | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6ff75bdd67 to 0 from 1 |
| | openshift-controller-manager | kubelet | controller-manager-6f9655dc5d-8lp25 | Killing | Stopping container controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6ff75bdd67-drxcb | Killing | Stopping container route-controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6ff75bdd67 | SuccessfulDelete | Deleted pod: route-controller-manager-6ff75bdd67-drxcb |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6f9655dc5d to 0 from 1 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-7f758fb97d-qmbkd | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no pods available on any node.") |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-7f758fb97d-qmbkd | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7f758fb97d-qmbkd to master-0 |
| | openshift-controller-manager | default-scheduler | controller-manager-58fff6b545-fvbrw | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-58fff6b545-fvbrw to master-0 |
| | openshift-controller-manager | multus | controller-manager-58fff6b545-fvbrw | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-58fff6b545-fvbrw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f758fb97d-qmbkd | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-bcv9p" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f758fb97d-qmbkd | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f758fb97d-qmbkd | Started | Started container route-controller-manager |
| (x4) | openshift-console-operator | kubelet | console-operator-76b6568d85-grltt | Created | Created container: console-operator |
| | openshift-controller-manager | kubelet | controller-manager-58fff6b545-fvbrw | Created | Created container: controller-manager |
| | openshift-route-controller-manager | multus | route-controller-manager-7f758fb97d-qmbkd | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f758fb97d-qmbkd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7f758fb97d-qmbkd | Created | Created container: route-controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-58fff6b545-fvbrw | Started | Started container controller-manager |
| (x4) | openshift-console-operator | kubelet | console-operator-76b6568d85-grltt | Started | Started container console-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-authentication\": dial tcp 172.30.0.1:443: connect: connection refused\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication\": dial tcp 172.30.0.1:443: connect: connection refused\nOpenshiftAuthenticationStaticResourcesDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"10577\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 17, 17, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002727c68), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Progressing changed from False to True (""),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"10577\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 17, 17, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002727c68), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-7f758fb97d-qmbkd_8b00e1d7-408d-434d-9d0f-4e416328e5b4 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
openshift-operator-controller |
operator-controller-controller-manager-57777556ff-pn5gg_743f9a38-5e19-4787-a08c-b5c9971c955d |
9c4404e7.operatorframework.io |
LeaderElection |
operator-controller-controller-manager-57777556ff-pn5gg_743f9a38-5e19-4787-a08c-b5c9971c955d became leader | |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | default-scheduler | machine-config-controller-b4f87c5b9-ljq8q | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-b4f87c5b9-ljq8q to master-0 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-b4f87c5b9 | SuccessfulCreate | Created pod: machine-config-controller-b4f87c5b9-ljq8q |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-b4f87c5b9 to 1 |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-ljq8q | Started | Started container machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-ljq8q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-config-operator | multus | machine-config-controller-b4f87c5b9-ljq8q | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-ljq8q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-ljq8q | Created | Created container: machine-config-controller |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-69c6b55594-l2279 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-l2279 to master-0 |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-l2279 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0" |
| | openshift-network-diagnostics | default-scheduler | network-check-source-b4bf74f6-wqvfk | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-b4bf74f6-wqvfk to master-0 |
| | openshift-ingress-canary | default-scheduler | ingress-canary-gmjrw | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-gmjrw to master-0 |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-gmjrw |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-6mbkc | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-6mbkc to master-0 |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-6mbkc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-69c6b55594-l2279 | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-6mbkc |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-ljq8q | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-ljq8q | Created | Created container: kube-rbac-proxy |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-4cst9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777" |
| | openshift-ingress | default-scheduler | router-default-7dcf5569b5-4cst9 | Scheduled | Successfully assigned openshift-ingress/router-default-7dcf5569b5-4cst9 to master-0 |
| | openshift-network-diagnostics | multus | network-check-source-b4bf74f6-wqvfk | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-gmjrw | Created | Created container: serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-gmjrw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" already present on machine |
| | openshift-network-diagnostics | kubelet | network-check-source-b4bf74f6-wqvfk | Started | Started container check-endpoints |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-6mbkc | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-6mbkc | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-ingress-canary | multus | ingress-canary-gmjrw | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-gmjrw | Started | Started container serve-healthcheck-canary |
| | openshift-network-diagnostics | kubelet | network-check-source-b4bf74f6-wqvfk | Created | Created container: check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-b4bf74f6-wqvfk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| (x3) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.35} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015}] |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-l2279 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-4cst9 | Started | Started container router |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-l2279 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0" in 3.07s (3.07s including waiting). Image size: 444573129 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-l2279 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-4cst9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777" in 3.566s (3.566s including waiting). Image size: 487159945 bytes. |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-4cst9 | Created | Created container: router |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| (x3) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Unable to apply 4.18.35: deployment.apps "machine-config-controller" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-openshift-authentication-metadata | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | openshift-monitoring | replicaset-controller | prometheus-operator-6c8df6d4b | SuccessfulCreate | Created pod: prometheus-operator-6c8df6d4b-6xvjm |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-6c8df6d4b to 1 |
| | openshift-monitoring | default-scheduler | prometheus-operator-6c8df6d4b-6xvjm | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-6c8df6d4b-6xvjm to master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-967b7967b to 1 |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-6xvjm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9174864cd47431966d033d283bc7836e7ca579139ef85c36275db542fda80803" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-monitoring | multus | prometheus-operator-6c8df6d4b-6xvjm | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-authentication | default-scheduler | oauth-openshift-967b7967b-mb725 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-967b7967b-mb725 to master-0 |
| | openshift-authentication | replicaset-controller | oauth-openshift-967b7967b | SuccessfulCreate | Created pod: oauth-openshift-967b7967b-mb725 |
| | openshift-authentication | multus | oauth-openshift-967b7967b-mb725 | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-967b7967b-mb725 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-authentication\": dial tcp 172.30.0.1:443: connect: connection refused\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication\": dial tcp 172.30.0.1:443: connect: connection refused\nOpenshiftAuthenticationStaticResourcesDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"10577\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 17, 17, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002727c68), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"10577\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 17, 17, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002727c68), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-6xvjm | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-6xvjm | Started | Started container kube-rbac-proxy |
| | openshift-authentication | kubelet | oauth-openshift-967b7967b-mb725 | Created | Created container: oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-967b7967b-mb725 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" in 2.976s (2.976s including waiting). Image size: 481463651 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-6xvjm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication | replicaset-controller | oauth-openshift-967b7967b | SuccessfulDelete | Deleted pod: oauth-openshift-967b7967b-mb725 |
| | openshift-authentication | replicaset-controller | oauth-openshift-848cd9b885 | SuccessfulCreate | Created pod: oauth-openshift-848cd9b885-hcbh9 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-848cd9b885 to 1 from 0 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-967b7967b to 0 from 1 |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-6xvjm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9174864cd47431966d033d283bc7836e7ca579139ef85c36275db542fda80803" in 5.105s (5.105s including waiting). Image size: 461569068 bytes. |
| | openshift-authentication | kubelet | oauth-openshift-967b7967b-mb725 | Started | Started container oauth-openshift |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-6xvjm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9174864cd47431966d033d283bc7836e7ca579139ef85c36275db542fda80803" in 5.105s (5.105s including waiting). Image size: 461569068 bytes. | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-6xvjm |
Created |
Created container: prometheus-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-6xvjm |
Started |
Started container prometheus-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-6xvjm |
Created |
Created container: prometheus-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-6xvjm |
Started |
Started container prometheus-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-6xvjm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-authentication |
kubelet |
oauth-openshift-967b7967b-mb725 |
Killing |
Stopping container oauth-openshift | |
openshift-monitoring |
replicaset-controller |
openshift-state-metrics-5dc6c74576 |
SuccessfulCreate |
Created pod: openshift-state-metrics-5dc6c74576-gh4px | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
daemonset-controller |
node-exporter |
SuccessfulCreate |
Created pod: node-exporter-fxzb9 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
deployment-controller |
kube-state-metrics |
ScalingReplicaSet |
Scaled up replica set kube-state-metrics-7bbc969446 to 1 | |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-7bbc969446 |
SuccessfulCreate |
Created pod: kube-state-metrics-7bbc969446-vjbnk | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" | |
openshift-monitoring |
default-scheduler |
node-exporter-fxzb9 |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-fxzb9 to master-0 | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found | |
openshift-monitoring |
default-scheduler |
kube-state-metrics-7bbc969446-vjbnk |
Scheduled |
Successfully assigned openshift-monitoring/kube-state-metrics-7bbc969446-vjbnk to master-0 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
deployment-controller |
openshift-state-metrics |
ScalingReplicaSet |
Scaled up replica set openshift-state-metrics-5dc6c74576 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
deployment-controller |
kube-state-metrics |
ScalingReplicaSet |
Scaled up replica set kube-state-metrics-7bbc969446 to 1 | |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-7bbc969446 |
SuccessfulCreate |
Created pod: kube-state-metrics-7bbc969446-vjbnk | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing | |
openshift-monitoring |
default-scheduler |
openshift-state-metrics-5dc6c74576-gh4px |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-5dc6c74576-gh4px to master-0 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreateFailed |
Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found | |
openshift-monitoring |
daemonset-controller |
node-exporter |
SuccessfulCreate |
Created pod: node-exporter-fxzb9 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
default-scheduler |
node-exporter-fxzb9 |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-fxzb9 to master-0 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
default-scheduler |
openshift-state-metrics-5dc6c74576-gh4px |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-5dc6c74576-gh4px to master-0 | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found | |
openshift-monitoring |
default-scheduler |
kube-state-metrics-7bbc969446-vjbnk |
Scheduled |
Successfully assigned openshift-monitoring/kube-state-metrics-7bbc969446-vjbnk to master-0 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreateFailed |
Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
replicaset-controller |
openshift-state-metrics-5dc6c74576 |
SuccessfulCreate |
Created pod: openshift-state-metrics-5dc6c74576-gh4px | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing | |
openshift-monitoring |
deployment-controller |
openshift-state-metrics |
ScalingReplicaSet |
Scaled up replica set openshift-state-metrics-5dc6c74576 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
multus |
openshift-state-metrics-5dc6c74576-gh4px |
AddedInterface |
Add eth0 [10.128.0.82/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
kube-state-metrics-7bbc969446-vjbnk |
AddedInterface |
Add eth0 [10.128.0.81/23] from ovn-kubernetes | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
multus |
openshift-state-metrics-5dc6c74576-gh4px |
AddedInterface |
Add eth0 [10.128.0.82/23] from ovn-kubernetes | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-server |
SuccessfulCreate |
Created pod: machine-config-server-xhcnv | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-machine-config-operator |
kubelet |
machine-config-server-xhcnv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine | |
openshift-machine-config-operator |
default-scheduler |
machine-config-server-xhcnv |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-xhcnv to master-0 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing | |
openshift-monitoring |
default-scheduler |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
default-scheduler |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
multus |
kube-state-metrics-7bbc969446-vjbnk |
AddedInterface |
Add eth0 [10.128.0.81/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.83/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Created |
Created container: kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96ae39329a45e017d3444b3794dc95126641ca54fe645bb8729b3d501bd47c64" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-17lh7pj6890g7 -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-server-xhcnv |
Started |
Started container machine-config-server | |
openshift-machine-config-operator |
kubelet |
machine-config-server-xhcnv |
Created |
Created container: machine-config-server | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/grpc-tls -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Created |
Created container: kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Started |
Started container init-textfile | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Created |
Created container: init-textfile | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" in 1.486s (1.486s including waiting). Image size: 417688124 bytes. | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96ae39329a45e017d3444b3794dc95126641ca54fe645bb8729b3d501bd47c64" | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Started |
Started container init-textfile | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Created |
Created container: init-textfile | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.83/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f264240fe2a46d7aa95e56ee202a8403c3dad6c220cf29caff0936c82e0c086f" | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" in 1.486s (1.486s including waiting). Image size: 417688124 bytes. | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f264240fe2a46d7aa95e56ee202a8403c3dad6c220cf29caff0936c82e0c086f" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/grpc-tls -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-17lh7pj6890g7 -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
worker |
RenderedConfigGenerated |
rendered-worker-717a2b61c64c26e1bd913f89ee0b9c6f successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Created |
Created container: node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known 
endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Started |
Started container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Started |
Started container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Created |
Created container: node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
master |
RenderedConfigGenerated |
rendered-master-465830e34ccb2e8ab1b806756f1a574a successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Created |
Created container: kube-rbac-proxy | |
| (x11) | openshift-console-operator |
kubelet |
console-operator-76b6568d85-grltt |
BackOff |
Back-off restarting failed container console-operator in pod console-operator-76b6568d85-grltt_openshift-console-operator(269465d8-91d6-40d7-bfde-3eff9b93c1cf) |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-fxzb9 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
| (x5) | openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/oauth-openshift -n openshift-config-managed: configmaps "oauth-openshift" already exists |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "optional configmap/oauth-metadata has been created" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-7d7bcd498 to 1 | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
master |
RenderedConfigGenerated |
rendered-master-5e4c5dbc6c5ab0df96aa59d42e08702b successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-7d7bcd498 to 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
replicaset-controller |
thanos-querier-7d7bcd498 |
SuccessfulCreate |
Created pod: thanos-querier-7d7bcd498-w2pfb | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
replicaset-controller |
thanos-querier-7d7bcd498 |
SuccessfulCreate |
Created pod: thanos-querier-7d7bcd498-w2pfb | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nResourceSyncControllerDegraded: configmaps \"oauth-openshift\" already exists" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-monitoring |
default-scheduler |
thanos-querier-7d7bcd498-w2pfb |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-7d7bcd498-w2pfb to master-0 | |
openshift-monitoring |
default-scheduler |
thanos-querier-7d7bcd498-w2pfb |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-7d7bcd498-w2pfb to master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/metrics-server-5339u6k6jn3h3 -n openshift-monitoring because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/metrics-server-5339u6k6jn3h3 -n openshift-monitoring because it was missing | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Started |
Started container kube-state-metrics | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f264240fe2a46d7aa95e56ee202a8403c3dad6c220cf29caff0936c82e0c086f" in 14.224s (14.224s including waiting). Image size: 440559529 bytes. | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Created |
Created container: kube-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Started |
Started container kube-state-metrics | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Created |
Created container: kube-state-metrics | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f264240fe2a46d7aa95e56ee202a8403c3dad6c220cf29caff0936c82e0c086f" in 14.224s (14.224s including waiting). Image size: 440559529 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-7f758fb97d to 0 from 1 | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-598f995956 |
SuccessfulCreate |
Created pod: route-controller-manager-598f995956-qbmvv | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-7f758fb97d-qmbkd |
Killing |
Stopping container route-controller-manager | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7f758fb97d |
SuccessfulDelete |
Deleted pod: route-controller-manager-7f758fb97d-qmbkd | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-8c858dd9d to 1 | |
openshift-monitoring |
replicaset-controller |
metrics-server-8c858dd9d |
SuccessfulCreate |
Created pod: metrics-server-8c858dd9d-j8mx9 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-monitoring |
default-scheduler |
metrics-server-8c858dd9d-j8mx9 |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-8c858dd9d-j8mx9 to master-0 | |
openshift-monitoring |
default-scheduler |
metrics-server-8c858dd9d-j8mx9 |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-8c858dd9d-j8mx9 to master-0 | |
openshift-monitoring |
replicaset-controller |
metrics-server-8c858dd9d |
SuccessfulCreate |
Created pod: metrics-server-8c858dd9d-j8mx9 | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-8c858dd9d to 1 | |
openshift-controller-manager |
kubelet |
controller-manager-58fff6b545-fvbrw |
Killing |
Stopping container controller-manager | |
openshift-controller-manager |
replicaset-controller |
controller-manager-58fff6b545 |
SuccessfulDelete |
Deleted pod: controller-manager-58fff6b545-fvbrw | |
openshift-controller-manager |
default-scheduler |
controller-manager-67d4b5c54d-v56p6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-network-console |
deployment-controller |
networking-console-plugin |
ScalingReplicaSet |
Scaled up replica set networking-console-plugin-7c6b76c555 to 1 | |
openshift-network-console |
replicaset-controller |
networking-console-plugin-7c6b76c555 |
SuccessfulCreate |
Created pod: networking-console-plugin-7c6b76c555-dwqmc | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-598f995956 to 1 from 0 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-network-console namespace | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing | |
openshift-network-console |
default-scheduler |
networking-console-plugin-7c6b76c555-dwqmc |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-7c6b76c555-dwqmc to master-0 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-67d4b5c54d |
SuccessfulCreate |
Created pod: controller-manager-67d4b5c54d-v56p6 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-58fff6b545 to 0 from 1 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-67d4b5c54d to 1 from 0 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") | |
openshift-monitoring |
multus |
thanos-querier-7d7bcd498-w2pfb |
AddedInterface |
Add eth0 [10.128.0.84/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
metrics-server-8c858dd9d-j8mx9 |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96ae39329a45e017d3444b3794dc95126641ca54fe645bb8729b3d501bd47c64" in 15.385s (15.385s including waiting). Image size: 431974228 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" in 15.547s (15.547s including waiting). Image size: 437909443 bytes. | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-network-console |
multus |
networking-console-plugin-7c6b76c555-dwqmc |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Created |
Created container: kube-rbac-proxy-self | |
openshift-network-console |
kubelet |
networking-console-plugin-7c6b76c555-dwqmc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a22978e1669cdbaeee6ec0800f83559b56a2344f1c003f8cd60f27fac939680e" | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
deployment-controller |
monitoring-plugin |
ScalingReplicaSet |
Scaled up replica set monitoring-plugin-5d7d9df6f8 to 1 | |
openshift-monitoring |
replicaset-controller |
monitoring-plugin-5d7d9df6f8 |
SuccessfulCreate |
Created pod: monitoring-plugin-5d7d9df6f8-qwngc | |
| (x2) | openshift-route-controller-manager |
default-scheduler |
route-controller-manager-598f995956-qbmvv |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-controller-manager |
default-scheduler |
controller-manager-67d4b5c54d-v56p6 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-67d4b5c54d-v56p6 to master-0 | |
openshift-monitoring |
kubelet |
metrics-server-8c858dd9d-j8mx9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39c122c726d1bf520dd481350fee5ad940762d5d4c9f8c012db6bf56b0ca8757" | |
openshift-monitoring |
default-scheduler |
monitoring-plugin-5d7d9df6f8-qwngc |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-5d7d9df6f8-qwngc to master-0 | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Started |
Started container openshift-state-metrics | |
openshift-monitoring |
kubelet |
openshift-state-metrics-5dc6c74576-gh4px |
Created |
Created container: openshift-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7bbc969446-vjbnk |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-67d4b5c54d-v56p6 |
ProbeError |
Readiness probe error: Get "https://10.128.0.88:8443/healthz": dial tcp 10.128.0.88:8443: connect: connection refused body: | |
openshift-monitoring |
multus |
monitoring-plugin-5d7d9df6f8-qwngc |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" | |
openshift-monitoring |
kubelet |
monitoring-plugin-5d7d9df6f8-qwngc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bddcddc296ce363f3b55783425259057ee0ae6d033c6b4a430d92eacb9830748" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
| (x3) | openshift-authentication |
default-scheduler |
oauth-openshift-848cd9b885-hcbh9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-controller-manager |
multus |
controller-manager-67d4b5c54d-v56p6 |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-controller-manager |
kubelet |
controller-manager-67d4b5c54d-v56p6 |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.88:8443/healthz": dial tcp 10.128.0.88:8443: connect: connection refused | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
worker |
RenderedConfigGenerated |
rendered-worker-d595851a2beee88c421837a26a7bcd23 successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-598f995956-qbmvv |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-598f995956-qbmvv to master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" in 4.205s (4.205s including waiting). Image size: 502712961 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing | |
openshift-network-console |
kubelet |
networking-console-plugin-7c6b76c555-dwqmc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a22978e1669cdbaeee6ec0800f83559b56a2344f1c003f8cd60f27fac939680e" in 4.058s (4.058s including waiting). Image size: 446952788 bytes. | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-grpc-tls-4i3vpe46p0rrq -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
metrics-server-8c858dd9d-j8mx9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39c122c726d1bf520dd481350fee5ad940762d5d4c9f8c012db6bf56b0ca8757" in 4.189s (4.189s including waiting). Image size: 471431303 bytes. | |
openshift-monitoring |
kubelet |
monitoring-plugin-5d7d9df6f8-qwngc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bddcddc296ce363f3b55783425259057ee0ae6d033c6b4a430d92eacb9830748" in 3.646s (3.646s including waiting). Image size: 447814986 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing | |
| (x4) | openshift-console-operator |
kubelet |
console-operator-76b6568d85-grltt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8" already present on machine |
openshift-monitoring |
kubelet |
monitoring-plugin-5d7d9df6f8-qwngc |
Created |
Created container: monitoring-plugin | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
default-scheduler |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 3 triggered by "optional configmap/oauth-metadata has been created" | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
metrics-server-8c858dd9d-j8mx9 |
Started |
Started container metrics-server | |
openshift-monitoring |
kubelet |
monitoring-plugin-5d7d9df6f8-qwngc |
Started |
Started container monitoring-plugin | |
openshift-network-console |
kubelet |
networking-console-plugin-7c6b76c555-dwqmc |
Created |
Created container: networking-console-plugin | |
openshift-network-console |
kubelet |
networking-console-plugin-7c6b76c555-dwqmc |
Started |
Started container networking-console-plugin | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
metrics-server-8c858dd9d-j8mx9 |
Created |
Created container: metrics-server | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" | |
openshift-route-controller-manager |
multus |
route-controller-manager-598f995956-qbmvv |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-598f995956-qbmvv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-598f995956-qbmvv |
Created |
Created container: route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-598f995956-qbmvv |
Started |
Started container route-controller-manager | |
openshift-console-operator |
console-operator |
console-operator-lock |
LeaderElection |
console-operator-76b6568d85-grltt_32a2fb62-55cc-4119-8919-5babf9c87e2f became leader | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorVersionChanged |
clusteroperator/console version "operator" changed from "" to "4.18.35" | |
openshift-console |
default-scheduler |
downloads-66b8ffb895-7n68q |
Scheduled |
Successfully assigned openshift-console/downloads-66b8ffb895-7n68q to master-0 | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.90/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/config has changed" | |
openshift-console-operator |
console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing | |
openshift-console |
deployment-controller |
downloads |
ScalingReplicaSet |
Scaled up replica set downloads-66b8ffb895 to 1 | |
| (x2) | openshift-console |
controllermanager |
console |
NoPods |
No matching pods found |
openshift-console-operator |
console-operator |
console-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-598f995956-qbmvv_2efbc633-65c1-41d3-8a56-3d1f3909d4e3 became leader | |
openshift-console |
controllermanager |
downloads |
NoPods |
No matching pods found | |
openshift-console-operator |
console-operator-console-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/console -n openshift-console because it was missing | |
openshift-authentication |
kubelet |
oauth-openshift-967b7967b-mb725 |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.80:6443/healthz": dial tcp 10.128.0.80:6443: connect: connection refused | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.35"}] | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentCreated |
Created Deployment.apps/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded changed from Unknown to False ("All is well"),Progressing changed from Unknown to False ("All is well") | |
openshift-authentication |
kubelet |
oauth-openshift-967b7967b-mb725 |
ProbeError |
Readiness probe error: Get "https://10.128.0.80:6443/healthz": dial tcp 10.128.0.80:6443: connect: connection refused body: | |
openshift-console-operator |
console-operator-health-check-controller-healthcheckcontroller |
console-operator |
FastControllerResync |
Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling | |
openshift-console |
replicaset-controller |
downloads-66b8ffb895 |
SuccessfulCreate |
Created pod: downloads-66b8ffb895-7n68q | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/downloads -n openshift-console because it changed | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" in 5.745s (5.745s including waiting). Image size: 467542663 bytes. | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/default-ingress-cert -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-authentication |
default-scheduler |
oauth-openshift-848cd9b885-hcbh9 |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-848cd9b885-hcbh9 to master-0 | |
openshift-console |
kubelet |
downloads-66b8ffb895-7n68q |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ddc5283caf2ced75a94ddf0e8a43c431889692007e8a875a187b25c35b45a9e2" | |
openshift-console |
multus |
downloads-66b8ffb895-7n68q |
AddedInterface |
Add eth0 [10.128.0.91/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/downloads -n openshift-console because it was missing | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" in 3.971s (3.971s including waiting). Image size: 413104068 bytes. | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-authentication |
multus |
oauth-openshift-848cd9b885-hcbh9 |
AddedInterface |
Add eth0 [10.128.0.92/23] from ovn-kubernetes | |
openshift-authentication |
kubelet |
oauth-openshift-848cd9b885-hcbh9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" | |
openshift-console-operator |
console-operator-oauthclient-secret-controller-oauthclientsecretcontroller |
console-operator |
SecretCreated |
Created Secret/console-oauth-config -n openshift-console because it was missing | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" in 3.971s (3.971s including waiting). Image size: 413104068 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-authentication |
kubelet |
oauth-openshift-848cd9b885-hcbh9 |
Created |
Created container: oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-848cd9b885-hcbh9 |
Started |
Started container oauth-openshift | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" | |
openshift-monitoring |
kubelet |
thanos-querier-7d7bcd498-w2pfb |
Created |
Created container: kube-rbac-proxy-rules | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" in 6.198s (6.198s including waiting). Image size: 605698193 bytes. | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" in 6.198s (6.198s including waiting). Image size: 605698193 bytes. | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-config -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentCreated |
Created Deployment.apps/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-public -n openshift-config-managed because it was missing | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-console |
replicaset-controller |
console-cdc9755cd |
SuccessfulCreate |
Created pod: console-cdc9755cd-fl679 | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-cdc9755cd to 1 | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveConsoleURL |
assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-console |
default-scheduler |
console-cdc9755cd-fl679 |
Scheduled |
Successfully assigned openshift-console/console-cdc9755cd-fl679 to master-0 | |
openshift-console |
multus |
console-cdc9755cd-fl679 |
AddedInterface |
Add eth0 [10.128.0.93/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-cdc9755cd-fl679 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-3-master-0 |
AddedInterface |
Add eth0 [10.128.0.94/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing | |
openshift-console |
kubelet |
console-cdc9755cd-fl679 |
Started |
Started container console | |
openshift-console |
kubelet |
console-cdc9755cd-fl679 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" in 4.399s (4.399s including waiting). Image size: 633877280 bytes. | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-console |
kubelet |
console-cdc9755cd-fl679 |
Created |
Created container: console | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console | replicaset-controller | console-697d79fb97 | SuccessfulCreate | Created pod: console-697d79fb97-jrvk4 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console | default-scheduler | console-697d79fb97-jrvk4 | Scheduled | Successfully assigned openshift-console/console-697d79fb97-jrvk4 to master-0 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-697d79fb97 to 1 |
| | openshift-console | kubelet | console-697d79fb97-jrvk4 | Created | Created container: console |
| | openshift-authentication | kubelet | oauth-openshift-848cd9b885-hcbh9 | Killing | Stopping container oauth-openshift |
| | openshift-console | kubelet | console-697d79fb97-jrvk4 | Started | Started container console |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-848cd9b885 to 0 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-848cd9b885 | SuccessfulDelete | Deleted pod: oauth-openshift-848cd9b885-hcbh9 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-697d79fb97-jrvk4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" |
| | openshift-console | multus | console-697d79fb97-jrvk4 | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
| | openshift-authentication | replicaset-controller | oauth-openshift-5455ddcb95 | SuccessfulCreate | Created pod: oauth-openshift-5455ddcb95-p88pn |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 4 triggered by "required configmap/config has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-5455ddcb95 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Killing | Stopping container installer |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.35, 0 replicas available" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-5e4c5dbc6c5ab0df96aa59d42e08702b |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-5e4c5dbc6c5ab0df96aa59d42e08702b |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" |
| | openshift-kube-apiserver | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes |
| (x2) | openshift-authentication | default-scheduler | oauth-openshift-5455ddcb95-p88pn | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console | kubelet | downloads-66b8ffb895-7n68q | Started | Started container download-server |
| | openshift-console | kubelet | downloads-66b8ffb895-7n68q | Created | Created container: download-server |
| | openshift-console | kubelet | downloads-66b8ffb895-7n68q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ddc5283caf2ced75a94ddf0e8a43c431889692007e8a875a187b25c35b45a9e2" in 43.132s (43.132s including waiting). Image size: 2895807090 bytes. |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-authentication | default-scheduler | oauth-openshift-5455ddcb95-p88pn | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-5455ddcb95-p88pn to master-0 |
| | openshift-authentication | multus | oauth-openshift-5455ddcb95-p88pn | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | Created | Created container: oauth-openshift |
| (x4) | openshift-console | kubelet | downloads-66b8ffb895-7n68q | Unhealthy | Readiness probe failed: Get "http://10.128.0.91:8080/": dial tcp 10.128.0.91:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-66b8ffb895-7n68q | Unhealthy | Liveness probe failed: Get "http://10.128.0.91:8080/": dial tcp 10.128.0.91:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-66b8ffb895-7n68q | ProbeError | Liveness probe error: Get "http://10.128.0.91:8080/": dial tcp 10.128.0.91:8080: connect: connection refused body: |
| | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | Started | Started container oauth-openshift |
| (x4) | openshift-console | kubelet | downloads-66b8ffb895-7n68q | ProbeError | Readiness probe error: Get "http://10.128.0.91:8080/": dial tcp 10.128.0.91:8080: connect: connection refused body: |
| | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | ProbeError | Readiness probe error: Get "https://10.128.0.97:6443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | Unhealthy | Readiness probe failed: Get "https://10.128.0.97:6443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | Unhealthy | Readiness probe failed: Get "https://10.128.0.97:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-authentication | kubelet | oauth-openshift-5455ddcb95-p88pn | ProbeError | Readiness probe error: Get "https://10.128.0.97:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-dddff6458-6fzwb_0911920c-f742-4a28-82fa-209d6c85ce74 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| (x2) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.35} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015}] |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0 I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0319 09:21:26.115327 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0319 09:21:26.115340 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0 F0319 09:22:10.123024 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-67dcd4998-wrdwm_75a7f3ea-211d-4317-9ea5-c0149726d599 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container kube-rbac-proxy-metric |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-monitoring | default-scheduler | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | default-scheduler | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.35"} {"oauth-apiserver" "4.18.35"}] to [{"operator" "4.18.35"} {"oauth-apiserver" "4.18.35"} {"oauth-openshift" "4.18.35_openshift"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.35_openshift" |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "All is well" |
| (x2) | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| (x2) | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | telemeter-client-678cbbd786 | SuccessfulCreate | Created pod: telemeter-client-678cbbd786-bf7l4 |
| | openshift-monitoring | default-scheduler | telemeter-client-678cbbd786-bf7l4 | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-678cbbd786-bf7l4 to master-0 |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy-thanos |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | default-scheduler | telemeter-client-678cbbd786-bf7l4 | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-678cbbd786-bf7l4 to master-0 |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-678cbbd786 to 1 |
| | openshift-monitoring | replicaset-controller | telemeter-client-678cbbd786 | SuccessfulCreate | Created pod: telemeter-client-678cbbd786-bf7l4 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-678cbbd786 to 1 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | multus | telemeter-client-678cbbd786-bf7l4 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" |
| | openshift-monitoring | multus | telemeter-client-678cbbd786-bf7l4 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | default-scheduler | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| (x2) | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| (x2) | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" in 2.35s (2.35s including waiting). Image size: 480540851 bytes. |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" in 2.35s (2.35s including waiting). Image size: 480540851 bytes. |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Started | Started container reload |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-678cbbd786-bf7l4 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.35 because: failed to apply machine config controller manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/machine-config-controller": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-5e4c5dbc6c5ab0df96aa59d42e08702b to Done |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-5e4c5dbc6c5ab0df96aa59d42e08702b |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-5e4c5dbc6c5ab0df96aa59d42e08702b and node has been uncordoned |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"image.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/image.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused | |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-retry-1-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | BackOff | Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36) |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327) |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| (x12) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.35 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_6fa79b66-d5fb-413d-a629-f98ef8ab8958 became leader |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| (x10) | openshift-console | kubelet | console-cdc9755cd-fl679 | Unhealthy | Startup probe failed: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused |
| (x2) | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| (x2) | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x10) | openshift-console | kubelet | console-697d79fb97-jrvk4 | Unhealthy | Startup probe failed: Get "https://10.128.0.95:8443/health": dial tcp 10.128.0.95:8443: connect: connection refused |
| (x10) | openshift-console | kubelet | console-697d79fb97-jrvk4 | ProbeError | Startup probe error: Get "https://10.128.0.95:8443/health": dial tcp 10.128.0.95:8443: connect: connection refused body: |
| (x11) | openshift-console | kubelet | console-cdc9755cd-fl679 | ProbeError | Startup probe error: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused body: |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"14358\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 25, 5, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004d8b620), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_208b988e-762e-4cbc-89c5-cbbbf7884d8e became leader |
| | openshift-kube-scheduler | multus | installer-4-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-4-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-retry-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-4-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-4-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-console | replicaset-controller | console-cdc9755cd | SuccessfulDelete | Deleted pod: console-cdc9755cd-fl679 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-cdc9755cd to 0 from 1 |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_1fb0966f-f780-4392-8e23-a99203fdd826 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-ff989d6cc-rcnnp_d8ba7e7c-5778-46c8-87c2-ecfcd2b1099d became leader |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-7988f8bb7 to 1 |
| | openshift-console | replicaset-controller | console-7988f8bb7 | SuccessfulCreate | Created pod: console-7988f8bb7-j9w48 |
| | openshift-console | multus | console-7988f8bb7-j9w48 | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-7988f8bb7-j9w48 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console | kubelet | console-7988f8bb7-j9w48 | Created | Created container: console |
| | openshift-console | kubelet | console-7988f8bb7-j9w48 | Started | Started container console |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309639 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.309648 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0 I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0319 09:21:33.309639 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0319 09:21:33.309648 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| | openshift-console | kubelet | console-697d79fb97-jrvk4 | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-697d79fb97 | SuccessfulDelete | Deleted pod: console-697d79fb97-jrvk4 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-697d79fb97 to 0 from 1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-8544cbcf9c-ct498_8780d930-5ec4-4180-8ef9-77335e1b8186 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries") |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-67d4b5c54d-v56p6 became leader |
| | openshift-kube-controller-manager | multus | installer-2-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Started | Started container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 1 because static pod is ready |
| | openshift-console | replicaset-controller | console-c75dc494b | SuccessfulCreate | Created pod: console-c75dc494b-tvf5c |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-c75dc494b to 1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
openshift-console |
kubelet |
console-c75dc494b-tvf5c |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine | |
openshift-console |
multus |
console-c75dc494b-tvf5c |
AddedInterface |
Add eth0 [10.128.0.104/23] from ovn-kubernetes | |
openshift-console |
replicaset-controller |
console-c75dc494b |
SuccessfulDelete |
Deleted pod: console-c75dc494b-tvf5c | |
openshift-kube-scheduler |
static-pod-installer |
installer-4-retry-1-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 4 | |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-c75dc494b to 0 from 1 |
| | openshift-console | kubelet | console-c75dc494b-tvf5c | Created | Created container: console |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-54cf565479 to 1 from 0 |
| | openshift-console | kubelet | console-c75dc494b-tvf5c | Started | Started container console |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-console | replicaset-controller | console-54cf565479 | SuccessfulCreate | Created pod: console-54cf565479-phtrp |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-console | kubelet | console-c75dc494b-tvf5c | Killing | Stopping container console |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_4a0c23b8-1a81-41a5-89d4-b11fbf707fc9 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.35"}] to [{"raw-internal" "4.18.35"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.35"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-console | kubelet | console-54cf565479-phtrp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console | kubelet | console-54cf565479-phtrp | Started | Started container console |
| | openshift-console | kubelet | console-54cf565479-phtrp | Created | Created container: console |
| | openshift-console | multus | console-54cf565479-phtrp | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: " |
| | openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'") |
| | openshift-console | kubelet | console-7988f8bb7-j9w48 | Killing | Stopping container console |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7988f8bb7 to 0 from 1 |
| | openshift-console | replicaset-controller | console-7988f8bb7 | SuccessfulDelete | Deleted pod: console-7988f8bb7-j9w48 |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | static-pod-installer | installer-2-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309639 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.309648 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309639 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.309648 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.35"}] to [{"raw-internal" "4.18.35"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.35"}] |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.35" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_6d6d690e-7b50-471c-b5fc-930a9028ec1e became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| (x18) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.35" |
| (x18) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309639 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.309648 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309639 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.309648 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: " to "All is well" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:33.214704 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309569 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:33.309639 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.309648 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:33.312385 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0319 09:21:57.317851 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:17.316683 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:37.314343 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0319 09:22:51.314896 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0319 09:22:51.314973 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 2 because static pod is ready |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-b865698dc-wwkqz_47b70232-2354-48ea-902d-79b46a1728b8 became leader |
| | openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcdctl |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-kqb2h | Created | Created container: approver |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| (x4) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-wshz8 | BackOff | Back-off restarting failed container insights-operator in pod insights-operator-68bf6ff9d6-wshz8_openshift-insights(0cb70a30-a8d1-4037-81e6-eb4f0510a234) |
| (x4) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-wshz8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" already present on machine |
| (x4) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-wshz8 | Started | Started container insights-operator |
| (x4) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-wshz8 | Created | Created container: insights-operator |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Created | Created container: manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Started | Started container manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Started | Started container manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-7wdws | Created | Created container: manager |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" already present on machine |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-6qck2 | Started | Started container marketplace-operator |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Created | Created container: manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-pn5gg | Started | Started container manager |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" already present on machine |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Created | Created container: cluster-cloud-controller-manager |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Started | Started container cluster-cloud-controller-manager |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" already present on machine |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Created | Created container: config-sync-controllers |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-rz5nt | Started | Started container config-sync-controllers |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdd28dfe7132e19af9f013f72cf120d970bc31b6b74693af262f8d2e82a096e1" already present on machine |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | Started | Started container machine-approver-controller |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-cscz5 | Created | Created container: machine-approver-controller |
| (x2) | openshift-controller-manager | kubelet | controller-manager-67d4b5c54d-v56p6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| (x2) | openshift-controller-manager | kubelet | controller-manager-67d4b5c54d-v56p6 | Created | Created container: controller-manager |
| (x2) | openshift-controller-manager | kubelet | controller-manager-67d4b5c54d-v56p6 | Started | Started container controller-manager |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" already present on machine |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" already present on machine |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Started | Started container control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Created | Created container: control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Created | Created container: control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-l8kmn | Started | Started container control-plane-machine-set-operator |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Created | Created container: ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-r75tv | Started | Started container ovnkube-cluster-manager |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" already present on machine |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Created | Created container: machine-api-operator |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Started | Started container machine-api-operator |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Created | Created container: machine-api-operator |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-qx75g | Started | Started container machine-api-operator |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Created | Created container: cluster-baremetal-operator |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Started | Started container cluster-baremetal-operator |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Started | Started container cluster-baremetal-operator |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Created | Created container: cluster-baremetal-operator |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d" already present on machine |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_661d0257-94a1-459b-bf65-f88087c05633 became leader |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-67d4b5c54d-v56p6 became leader |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/console\": dial tcp 172.30.0.1:443: connect: connection refused",Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.35, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected",Available changed from False to True ("All is well"),Upgradeable changed from True to False ("DownloadsCustomRouteSyncUpgradeable: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused") |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-57f769d897-r75tv became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"14358\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 25, 5, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004d8b620), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)",Upgradeable message changed from "DownloadsCustomRouteSyncUpgradeable: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused" to "DownloadsCustomRouteSyncUpgradeable: Timeout: request did not complete within requested timeout - context deadline exceeded" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/console\": dial tcp 172.30.0.1:443: connect: connection refused" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)",Progressing changed from True to False ("All is well") |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsCustomRouteSyncDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)",Upgradeable changed from False to True ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nResourceSyncControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdateFailed | Failed to update ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: Timeout: request did not complete within requested timeout - context deadline exceeded |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.apps.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.authorization.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.build.openshift.io)]") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: Put \"https://172.30.0.1:443/apis/operator.openshift.io/v1/consoles/cluster/status\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded message changed from "All is well" to "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" |
| | openshift-insights | openshift-insights-operator | openshift-insights | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Timeout: request did not complete within requested timeout - context deadline exceeded |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: caused by changes in data.service-account-002.pub |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nResourceSyncControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded message changed from "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-scheduler-installer)\nBackingResourceControllerDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: " |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Timeout: request did not complete within requested timeout - context deadline exceeded |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \nKubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-64854d9cff-dzfgb_openshift-cluster-storage-operator(e3376275-294d-446d-9b4c-930df60dba01) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24" already present on machine |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Started | Started container snapshot-controller |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-dzfgb | Created | Created container: snapshot-controller |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-64854d9cff-dzfgb | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-64854d9cff-dzfgb became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigWriteError | Failed to write observed config: Timeout: request did not complete within requested timeout - context deadline exceeded |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services downloads)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)" to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7030c5cce"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ - "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ - "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5004457a"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "ConfigObservationDegraded: error writing updated observed config: Timeout: request did not complete within requested timeout - context deadline exceeded" |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.35 because: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets master-user-data-managed) |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well" |
| | openshift-etcd-operator | prometheus-controller | etcd | InvalidConfiguration | ServiceMonitor etcd was rejected due to invalid configuration: failed to get cert "<secret=etcd-metric-client,key=tls.crt>": unable to get secret "etcd-metric-client": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-metric-client) |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \nAPIServerDeploymentDegraded: deployment/openshift-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/openshift-apiserver: could not be retrieved"),Available changed from True to False ("APIServerDeploymentAvailable: deployment/openshift-apiserver: could not be retrieved") |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded message changed from "All is well" to "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)\nTargetConfigControllerDegraded: \"configmap/client-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nTargetConfigControllerDegraded: \"configmap/kube-scheduler-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets serving-cert)" |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-6f69995874-nm9nx_openshift-machine-api(cd42096c-f18d-4bb5-8a51-8761dc1edb73) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 1 on node "master-0": the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-retry-1-master-0) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nTargetConfigControllerDegraded: \"configmap/kube-scheduler-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets serving-cert)" to "InstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-retry-1-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nTargetConfigControllerDegraded: \"configmap/kube-scheduler-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets serving-cert)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/openshift-oauth-apiserver: could not be retrieved"),Available changed from True to False ("APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \nKubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \nKubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/etcd-client-5 -n openshift-kube-apiserver: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97" already present on machine |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-nm9nx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97" already present on machine |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \nKubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" to "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" |
| | openshift-network-node-identity | master-0_85b80673-7e24-459c-82b5-541b62051b30 | ovnkube-identity | LeaderElection | master-0_85b80673-7e24-459c-82b5-541b62051b30 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" to "All is well" |
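The `OperatorStatusChanged` messages in this table follow a recurring `Degraded message changed from "OLD" to "NEW"` pattern. A minimal sketch of pulling the before/after condition text out of such a message with a regular expression (the names `TRANSITION` and `extract_transition` are illustrative, not part of any OpenShift tooling):

```python
import re

# Matches the 'changed from "OLD" to "NEW"' pattern seen in the
# OperatorStatusChanged events. DOTALL lets OLD/NEW span embedded newlines.
TRANSITION = re.compile(r'changed from "(?P<old>.*)" to "(?P<new>.*)"', re.DOTALL)

def extract_transition(message: str):
    """Return (old, new) condition text, or None if the message
    does not describe a from/to transition."""
    m = TRANSITION.search(message)
    if m is None:
        return None
    return m.group("old"), m.group("new")

msg = ('Status for clusteroperator/service-ca changed: Degraded message changed '
       'from "Degraded: the server was unable to return a response in the time '
       'allotted, but may still be processing the request (get namespaces '
       'openshift-service-ca)" to "All is well"')
old, new = extract_transition(msg)
print(new)  # "All is well"
```

Note that messages containing nested escaped quotes may need more careful parsing; the greedy groups here simply split on the last `" to "` boundary, which is sufficient for the simple transitions above.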
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)" to "OAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)" to "ConfigMapSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request\nOAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required secret/service-account-private-key has changed,required secret/localhost-recovery-client-token has changed" |
| (x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdateFailed | Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Operation cannot be fulfilled on secrets "service-account-private-key": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded message changed from "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)" to "All is well" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \nAPIServerDeploymentDegraded: deployment/openshift-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: " to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: ",Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get 
clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces 
openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-retry-1-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: 
OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps 
config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nTargetConfigControllerDegraded: \"configmap/kube-scheduler-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets serving-cert)" to "InstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-retry-1-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) 
\"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 
cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets webhook-authentication-integrated-oauth)\nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" 
(string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but 
may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: 
\"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the 
time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods 
kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " | |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretUpdated | Updated Secret/v4-0-config-system-session -n openshift-authentication because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nTargetConfigControllerDegraded: \"configmap/config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)\nTargetConfigControllerDegraded: \"configmap/client-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: " to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: " |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-missingstaticpodcontroller | etcd-operator | MissingStaticPod | static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStatePodsDegraded: Unhealthy pods found: error getting pod \"oauth-openshift-5455ddcb95-p88pn\": the server was unable to return a response in the time allotted, but may still be processing the request (get pods oauth-openshift-5455ddcb95-p88pn)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: \nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: " to "All is well" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-retry-1-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" to "InstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-retry-1-master-0)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0319 09:21:26.103934 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115262 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0319 09:21:26.115327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.115340 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0319 09:21:26.118650 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0319 09:21:56.119009 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0319 09:22:10.123024 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nServiceSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get services console)" to "OAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 4 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: "),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:useroauthaccesstoken-manager)\nAPIServerStaticResourcesDegraded: " to "All is well" |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-0)" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required secret/service-account-private-key has changed,required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Timeout: request did not complete within requested timeout - context deadline exceeded\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-scheduler | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts catalogd-controller-manager)\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-manager-role)\nCatalogdStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts catalogd-controller-manager)\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-manager-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-editor-role)\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts catalogd-controller-manager)\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-manager-role)\nCatalogdStaticResourcesDegraded: " |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)\nTargetConfigControllerDegraded: \"secrets/csr-signer\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets csr-signer)" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: " |
openshift-kube-apiserver |
kubelet |
installer-5-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-5-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "All is well" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-node-reader)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-crd-reader)\nKubeAPIServerStaticResourcesDegraded: " | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-598f995956 |
SuccessfulDelete |
Deleted pod: route-controller-manager-598f995956-qbmvv | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-69bfd98cf to 1 from 0 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-55f5cd545d |
SuccessfulCreate |
Created pod: route-controller-manager-55f5cd545d-pkh9v | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-55f5cd545d to 1 from 0 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-598f995956 to 0 from 1 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-5455ddcb95 to 0 from 1 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-69bfd98cf |
SuccessfulCreate |
Created pod: oauth-openshift-69bfd98cf-4dhhm | |
openshift-controller-manager |
kubelet |
controller-manager-67d4b5c54d-v56p6 |
Killing |
Stopping container controller-manager | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)\nEtcdMembersDegraded: No unhealthy members found" | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
openshift-authentication |
replicaset-controller |
oauth-openshift-5455ddcb95 |
SuccessfulDelete |
Deleted pod: oauth-openshift-5455ddcb95-p88pn | |
openshift-controller-manager |
replicaset-controller |
controller-manager-67d4b5c54d |
SuccessfulDelete |
Deleted pod: controller-manager-67d4b5c54d-v56p6 | |
openshift-authentication |
kubelet |
oauth-openshift-5455ddcb95-p88pn |
Killing |
Stopping container oauth-openshift | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-598f995956-qbmvv |
Killing |
Stopping container route-controller-manager | |
openshift-controller-manager |
replicaset-controller |
controller-manager-57bfdb854 |
SuccessfulCreate |
Created pod: controller-manager-57bfdb854-c5vtx | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-67d4b5c54d to 0 from 1 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server",Available changed from True to False ("OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-57bfdb854 to 1 from 0 | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts catalogd-controller-manager)\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io catalogd-manager-role)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-editor-role)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io 
operator-controller-clusterextension-editor-role)\nOperatorControllerStaticResourcesDegraded: " | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-editor-role)\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-node-reader)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-crd-reader)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 6" | |
openshift-kube-apiserver |
kubelet |
installer-5-master-0 |
Killing |
Stopping container installer | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded changed from True to False ("All is well") | |
openshift-kube-apiserver |
multus |
installer-6-master-0 |
AddedInterface |
Add eth0 [10.128.0.110/23] from ovn-kubernetes | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "OAuthClientSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)" to "OAuthClientsControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console)" | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Killing |
Stopping container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Killing |
Stopping container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler |
static-pod-installer |
installer-5-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-controller-manager |
static-pod-installer |
installer-3-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 3 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-machine-api |
control-plane-machine-set-operator-6f97756bc8-l8kmn_d670ee17-8e17-4d74-9181-035c8d71029f |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-6f97756bc8-l8kmn_d670ee17-8e17-4d74-9181-035c8d71029f became leader | |
openshift-machine-api |
control-plane-machine-set-operator-6f97756bc8-l8kmn_d670ee17-8e17-4d74-9181-035c8d71029f |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-6f97756bc8-l8kmn_d670ee17-8e17-4d74-9181-035c8d71029f became leader | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-recovery-controller | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 1 to 2 because static pod is ready | |
openshift-kube-scheduler |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-0_a3bc7b74-2770-466e-bc09-d7393b95d71a became leader | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-0_fec5cb4b-12fe-43f1-a6fb-af3d1eaf5be7 became leader | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-cluster-machine-approver |
master-0_fc0db17a-a346-4e98-adfa-554d4422bf81 |
cluster-machine-approver-leader |
LeaderElection |
master-0_fc0db17a-a346-4e98-adfa-554d4422bf81 became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 2 to 3 because static pod is ready | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_9616fa56-f5e5-4941-a8ab-65b006b007fa became leader | |
openshift-cloud-controller-manager-operator |
master-0_fa9bc499-1ad0-41da-aed6-fbf84b3e9b9a |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_fa9bc499-1ad0-41da-aed6-fbf84b3e9b9a became leader | |
openshift-machine-api |
machineapioperator |
machine-api-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-cloud-controller-manager-operator | master-0_64656f4a-de08-42ba-8bf2-1e009086e7a1 | cluster-cloud-config-sync-leader | LeaderElection | master-0_64656f4a-de08-42ba-8bf2-1e009086e7a1 became leader |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_672cd2c2-637e-4945-a42b-06379d8992ce became leader |
| (x2) | openshift-catalogd | catalogd-controller-manager-6864dc98f7-7wdws_b86b42cd-c11a-40dd-9bee-18650dda58d5 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-6864dc98f7-7wdws_b86b42cd-c11a-40dd-9bee-18650dda58d5 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"16338\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 29, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0033d5cc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-operator-controller | operator-controller-controller-manager-57777556ff-pn5gg_dd6d7709-75a5-427b-a8db-c39fce757e20 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-57777556ff-pn5gg_dd6d7709-75a5-427b-a8db-c39fce757e20 became leader |
| (x2) | openshift-machine-api | cluster-baremetal-operator-6f69995874-nm9nx_1ca77248-05ca-4529-8c87-f508fd20cc2f | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6f69995874-nm9nx_1ca77248-05ca-4529-8c87-f508fd20cc2f became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_2079cef4-3d72-400f-a57b-86582feee444 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 4 to 5 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_d7d5978d-911c-4516-aaac-3405dfc41df1 became leader |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-qd25m |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"16338\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 29, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0033d5cc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"16338\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 29, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0033d5cc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_7ead7cf9-6018-48a9-9f30-a39c87b94d2a became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"43fb8e3c-6505-401c-a4a3-bc00797a9a85\", ResourceVersion:\"16338\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 19, 9, 12, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 19, 9, 29, 10, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0033d5cc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-controller-manager | kubelet | controller-manager-57bfdb854-c5vtx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_37f74fff-47c4-4744-a27f-f4b145218df9 became leader |
| | openshift-controller-manager | multus | controller-manager-57bfdb854-c5vtx | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | node-ca-qd25m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60637f6eed5e9adc3af1863d0ef311c74b9109f00f464f9ce6cdfd21d0ee4608" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-55f5cd545d-pkh9v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| | openshift-authentication | multus | oauth-openshift-69bfd98cf-4dhhm | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-57bfdb854-c5vtx | Started | Started container controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-55f5cd545d-pkh9v | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-55f5cd545d-pkh9v_cf3f9c3d-d0a9-4e2a-8ce7-760e892ba799 became leader |
| | openshift-controller-manager | kubelet | controller-manager-57bfdb854-c5vtx | Created | Created container: controller-manager |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-57bfdb854-c5vtx became leader |
| | openshift-route-controller-manager | multus | route-controller-manager-55f5cd545d-pkh9v | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-55f5cd545d-pkh9v | Started | Started container route-controller-manager |
| | openshift-authentication | kubelet | oauth-openshift-69bfd98cf-4dhhm | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-69bfd98cf-4dhhm | Created | Created container: oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-69bfd98cf-4dhhm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" already present on machine |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.43.224:443/healthz\": dial tcp 172.30.43.224:443: connect: connection refused" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node." |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") |
openshift-image-registry |
kubelet |
node-ca-qd25m |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60637f6eed5e9adc3af1863d0ef311c74b9109f00f464f9ce6cdfd21d0ee4608" in 2.807s (2.807s including waiting). Image size: 481636992 bytes. | |
openshift-image-registry |
kubelet |
node-ca-qd25m |
Created |
Created container: node-ca | |
openshift-image-registry |
kubelet |
node-ca-qd25m |
Started |
Started container node-ca | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
| (x3) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7030c5cce"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ + "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ + "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5004457a"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-57bfdb854 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-57dc475b7c to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-55f5cd545d to 0 from 1 |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| (x4) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| (x4) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-57dc475b7c | SuccessfulCreate | Created pod: route-controller-manager-57dc475b7c-7h2xd |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-55f5cd545d | SuccessfulDelete | Deleted pod: route-controller-manager-55f5cd545d-pkh9v |
| | openshift-controller-manager | kubelet | controller-manager-57bfdb854-c5vtx | Killing | Stopping container controller-manager |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5cbdcbd8d7 to 1 from 0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-57bfdb854 | SuccessfulDelete | Deleted pod: controller-manager-57bfdb854-c5vtx |
| | openshift-controller-manager | replicaset-controller | controller-manager-5cbdcbd8d7 | SuccessfulCreate | Created pod: controller-manager-5cbdcbd8d7-wz2vj |
| | openshift-route-controller-manager | kubelet | route-controller-manager-55f5cd545d-pkh9v | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well") |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-57dc475b7c-7h2xd_dd121be4-0bf8-42aa-9a5a-e9b3f9718c33 became leader |
| | openshift-route-controller-manager | multus | route-controller-manager-57dc475b7c-7h2xd | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-57dc475b7c-7h2xd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-5cbdcbd8d7-wz2vj | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-5cbdcbd8d7-wz2vj | Created | Created container: controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-5cbdcbd8d7-wz2vj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-controller-manager | multus | controller-manager-5cbdcbd8d7-wz2vj | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-57dc475b7c-7h2xd | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-57dc475b7c-7h2xd | Started | Started container route-controller-manager |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5cbdcbd8d7-wz2vj became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for sushy-emulator namespace |
| | sushy-emulator | replicaset-controller | sushy-emulator-59477995f9 | SuccessfulCreate | Created pod: sushy-emulator-59477995f9-w2dvk |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-59477995f9 to 1 |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-w2dvk | Pulling | Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1773400388" |
| | sushy-emulator | multus | sushy-emulator-59477995f9-w2dvk | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-w2dvk | Started | Started container sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-w2dvk | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1773400388" in 6.435s (6.435s including waiting). Image size: 326085552 bytes. |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-w2dvk | Created | Created container: sushy-emulator |
| | sushy-emulator | deployment-controller | nova-console-poller | ScalingReplicaSet | Scaled up replica set nova-console-poller-676c49b655 to 1 |
| | sushy-emulator | replicaset-controller | nova-console-poller-676c49b655 | SuccessfulCreate | Created pod: nova-console-poller-676c49b655-wglrh |
| | sushy-emulator | multus | nova-console-poller-676c49b655-wglrh | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Created | Created container: console-poller-40e11878-552d-4f68-b67c-51eab43a3d28 |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.355s (5.355s including waiting). Image size: 202633582 bytes. |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Started | Started container console-poller-40e11878-552d-4f68-b67c-51eab43a3d28 |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Created | Created container: console-poller-2703ba59-c3ba-4b7d-a1f0-0109a4742d59 |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Started | Started container console-poller-2703ba59-c3ba-4b7d-a1f0-0109a4742d59 |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 488ms (488ms including waiting). Image size: 202633582 bytes. |
| | sushy-emulator | kubelet | nova-console-poller-676c49b655-wglrh | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | deployment-controller | nova-console-recorder | ScalingReplicaSet | Scaled up replica set nova-console-recorder-6d7748fc8c to 1 |
| | sushy-emulator | replicaset-controller | nova-console-recorder-6d7748fc8c | SuccessfulCreate | Created pod: nova-console-recorder-6d7748fc8c-9phbj |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" |
| | sushy-emulator | multus | nova-console-recorder-6d7748fc8c-9phbj | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Created | Created container: console-recorder-2703ba59-c3ba-4b7d-a1f0-0109a4742d59 |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 403ms (403ms including waiting). Image size: 664134874 bytes. |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Started | Started container console-recorder-40e11878-552d-4f68-b67c-51eab43a3d28 |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Started | Started container console-recorder-2703ba59-c3ba-4b7d-a1f0-0109a4742d59 |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Created | Created container: console-recorder-40e11878-552d-4f68-b67c-51eab43a3d28 |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" |
| | sushy-emulator | kubelet | nova-console-recorder-6d7748fc8c-9phbj | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 8.078s (8.078s including waiting). Image size: 664134874 bytes. |
| | openshift-machine-api | deployment-controller | metal3 | ScalingReplicaSet | Scaled up replica set metal3-546c754db to 1 |
| | openshift-machine-api | replicaset-controller | metal3-546c754db | SuccessfulCreate | Created pod: metal3-546c754db-8r9wh |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00db3efdb8113f49d0cf5fac1ce22ba738b29fb7ec51faa94e235d09dcfac70b" |
| | openshift-machine-api | replicaset-controller | metal3-baremetal-operator-78474bdc48 | SuccessfulCreate | Created pod: metal3-baremetal-operator-78474bdc48-lpxgr |
| | openshift-machine-api | deployment-controller | metal3-baremetal-operator | ScalingReplicaSet | Scaled up replica set metal3-baremetal-operator-78474bdc48 to 1 |
| (x2) | openshift-machine-api | kubelet | metal3-baremetal-operator-78474bdc48-lpxgr | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "baremetal-operator-webhook-server-cert" not found |
| | openshift-machine-api | replicaset-controller | metal3-image-customization-7b5d8dfcfd | SuccessfulCreate | Created pod: metal3-image-customization-7b5d8dfcfd-gjzrj |
| | openshift-machine-api | deployment-controller | metal3-image-customization | ScalingReplicaSet | Scaled up replica set metal3-image-customization-7b5d8dfcfd to 1 |
| | openshift-machine-api | multus | metal3-baremetal-operator-78474bdc48-lpxgr | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | metal3-baremetal-operator-78474bdc48-lpxgr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3160b9c4d5f4c3af05c6a073a1c590b9679be82d06193a819aaed0a2914e27f7" |
| | openshift-machine-api | kubelet | ironic-proxy-kc5xl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8484f65d4a84230f93c986362dde19aff9b77de01b50e5af1948748b51382001" |
| | openshift-machine-api | multus | metal3-image-customization-7b5d8dfcfd-gjzrj | AddedInterface | Add eth0 [10.128.0.120/23] from ovn-kubernetes |
| | openshift-machine-api | daemonset-controller | ironic-proxy | SuccessfulCreate | Created pod: ironic-proxy-kc5xl |
| | openshift-machine-api | kubelet | metal3-image-customization-7b5d8dfcfd-gjzrj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00db3efdb8113f49d0cf5fac1ce22ba738b29fb7ec51faa94e235d09dcfac70b" |
| | openshift-machine-api | metal3-baremetal-operator-78474bdc48-lpxgr_d7e8022a-4d08-4128-8d4b-c727709a32d7 | baremetal-operator | LeaderElection | metal3-baremetal-operator-78474bdc48-lpxgr_d7e8022a-4d08-4128-8d4b-c727709a32d7 became leader |
| | openshift-machine-api | kubelet | metal3-baremetal-operator-78474bdc48-lpxgr | Created | Created container: metal3-baremetal-operator |
| | openshift-machine-api | kubelet | metal3-baremetal-operator-78474bdc48-lpxgr | Started | Started container metal3-baremetal-operator |
| | openshift-machine-api | kubelet | metal3-baremetal-operator-78474bdc48-lpxgr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3160b9c4d5f4c3af05c6a073a1c590b9679be82d06193a819aaed0a2914e27f7" in 2.951s (2.951s including waiting). Image size: 512688600 bytes. |
| | openshift-machine-api | kubelet | ironic-proxy-kc5xl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8484f65d4a84230f93c986362dde19aff9b77de01b50e5af1948748b51382001" in 11.438s (11.438s including waiting). Image size: 710479311 bytes. |
| | openshift-machine-api | kubelet | ironic-proxy-kc5xl | Created | Created container: ironic-proxy |
| | openshift-machine-api | kubelet | ironic-proxy-kc5xl | Started | Started container ironic-proxy |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00db3efdb8113f49d0cf5fac1ce22ba738b29fb7ec51faa94e235d09dcfac70b" in 35.69s (35.69s including waiting). Image size: 1738357022 bytes. |
| | openshift-machine-api | kubelet | metal3-image-customization-7b5d8dfcfd-gjzrj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00db3efdb8113f49d0cf5fac1ce22ba738b29fb7ec51faa94e235d09dcfac70b" in 31.68s (31.68s including waiting). Image size: 1738357022 bytes. |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Created | Created container: machine-os-images |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Started | Started container machine-os-images |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8484f65d4a84230f93c986362dde19aff9b77de01b50e5af1948748b51382001" already present on machine |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Created | Created container: metal3-httpd |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8484f65d4a84230f93c986362dde19aff9b77de01b50e5af1948748b51382001" already present on machine |
| | openshift-machine-api | kubelet | metal3-image-customization-7b5d8dfcfd-gjzrj | BackOff | Back-off restarting failed container machine-os-images in pod metal3-image-customization-7b5d8dfcfd-gjzrj_openshift-machine-api(127c2b98-5be4-46f3-95d6-1901fab637ff) |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Started | Started container metal3-httpd |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Started | Started container metal3-ironic |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Created | Created container: metal3-ironic |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8484f65d4a84230f93c986362dde19aff9b77de01b50e5af1948748b51382001" already present on machine |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Created | Created container: metal3-ramdisk-logs |
| | openshift-machine-api | kubelet | metal3-546c754db-8r9wh | Started | Started container metal3-ramdisk-logs |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-storage namespace | |
openshift-marketplace |
job-controller |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 |
SuccessfulCreate |
Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Started |
Started container util | |
| (x2) | openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00db3efdb8113f49d0cf5fac1ce22ba738b29fb7ec51faa94e235d09dcfac70b" already present on machine |
| (x2) | openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00db3efdb8113f49d0cf5fac1ce22ba738b29fb7ec51faa94e235d09dcfac70b" already present on machine |
openshift-marketplace |
multus |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
AddedInterface |
Add eth0 [10.128.0.121/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Created |
Created container: util | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f28c3488ff36d9ed2ab6da459a8bead5f5949a4216e12b83f22a26bc502faed1" | |
| (x3) | openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Created |
Created container: machine-os-images |
| (x3) | openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Created |
Created container: machine-os-images |
| (x3) | openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Started |
Started container machine-os-images |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f28c3488ff36d9ed2ab6da459a8bead5f5949a4216e12b83f22a26bc502faed1" | |
| (x3) | openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Started |
Started container machine-os-images |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.014s (1.014s including waiting). Image size: 108204 bytes. | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f28c3488ff36d9ed2ab6da459a8bead5f5949a4216e12b83f22a26bc502faed1" in 3.122s (3.122s including waiting). Image size: 538654221 bytes. | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f28c3488ff36d9ed2ab6da459a8bead5f5949a4216e12b83f22a26bc502faed1" in 3.122s (3.122s including waiting). Image size: 538654221 bytes. | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tqlc4 |
Started |
Started container extract | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Created |
Created container: machine-image-customization-controller | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Created |
Created container: machine-image-customization-controller | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Started |
Started container machine-image-customization-controller | |
openshift-machine-api |
kubelet |
metal3-image-customization-7b5d8dfcfd-gjzrj |
Started |
Started container machine-image-customization-controller | |
openshift-marketplace |
job-controller |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 |
Completed |
Job completed | |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| | openshift-storage | replicaset-controller | lvms-operator-c6dbd8b78 | SuccessfulCreate | Created pod: lvms-operator-c6dbd8b78-6p8rh |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-c6dbd8b78 to 1 |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-storage | multus | lvms-operator-c6dbd8b78-6p8rh | AddedInterface | Add eth0 [10.128.0.122/23] from ovn-kubernetes |
| | openshift-storage | kubelet | lvms-operator-c6dbd8b78-6p8rh | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" |
| | openshift-storage | kubelet | lvms-operator-c6dbd8b78-6p8rh | Started | Started container manager |
| | openshift-storage | kubelet | lvms-operator-c6dbd8b78-6p8rh | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 5.014s (5.014s including waiting). Image size: 238305644 bytes. |
| | openshift-storage | kubelet | lvms-operator-c6dbd8b78-6p8rh | Created | Created container: manager |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for metallb-system namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | SuccessfulCreate | Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Created | Created container: util |
| | openshift-marketplace | multus | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | AddedInterface | Add eth0 [10.128.0.124/23] from ovn-kubernetes |
| | openshift-marketplace | job-controller | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c166a6a | SuccessfulCreate | Created pod: 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 |
| | openshift-marketplace | multus | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | AddedInterface | Add eth0 [10.128.0.123/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Created | Created container: util |
| | openshift-marketplace | job-controller | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874832f3 | SuccessfulCreate | Created pod: 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Started | Started container util |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:8d089fd8dd2786d76c87bd470470abb86f06587c447a3b309efe4116911aa11c" |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Started | Started container util |
| | openshift-marketplace | multus | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | AddedInterface | Add eth0 [10.128.0.125/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Created | Created container: util |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Started | Started container util |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:0a730171e8f18a8286180b7514213248748be998b454d1053b10d047ca51ae1e" |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Created | Created container: pull |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Started | Started container pull |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:8d089fd8dd2786d76c87bd470470abb86f06587c447a3b309efe4116911aa11c" in 1.418s (1.418s including waiting). Image size: 408540 bytes. |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c159x86 | Started | Started container extract |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Created | Created container: pull |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 4.376s (4.376s including waiting). Image size: 108352841 bytes. |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Started | Started container pull |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Created | Created container: pull |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Started | Started container pull |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:0a730171e8f18a8286180b7514213248748be998b454d1053b10d047ca51ae1e" in 3.657s (3.657s including waiting). Image size: 255829 bytes. |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Created | Created container: extract |
| | openshift-marketplace | job-controller | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726f148f | SuccessfulCreate | Created pod: 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c2wdr | Started | Started container extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Started | Started container extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5n72d8 | Created | Created container: extract |
| | openshift-marketplace | multus | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | AddedInterface | Add eth0 [10.128.0.126/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Created | Created container: util |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:0415e8263a185c51897bcd5d3ac2f5fe68e4818282a2f9dc89f215ee3b9dd1ed" |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Started | Started container util |
| | openshift-marketplace | job-controller | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c166a6a | Completed | Job completed |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:0415e8263a185c51897bcd5d3ac2f5fe68e4818282a2f9dc89f215ee3b9dd1ed" in 1.253s (1.253s including waiting). Image size: 5243975 bytes. |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Created | Created container: pull |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Started | Started container pull |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | Completed | Job completed |
| | openshift-marketplace | job-controller | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874832f3 | Completed | Job completed |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | RequirementsUnknown | requirements not yet checked |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Created | Created container: extract |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726thnjf | Started | Started container extract |
| | openshift-marketplace | job-controller | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726f148f | Completed | Job completed |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | RequirementsUnknown | requirements not yet checked |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | RequirementsNotMet | one or more requirements couldn't be found |
| (x3) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | AllRequirementsMet | all requirements found, attempting install |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. |
| | openshift-nmstate | replicaset-controller | nmstate-operator-796d4cfff4 | SuccessfulCreate | Created pod: nmstate-operator-796d4cfff4-h6jnz |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-h6jnz | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:60ec3d3da1ba06551932e9ebf8f98bd2cdf5e18c0b4b05c124847b7672458094" |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-nmstate | multus | nmstate-operator-796d4cfff4-h6jnz | AddedInterface | Add eth0 [10.128.0.127/23] from ovn-kubernetes |
| | openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-796d4cfff4 to 1 |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-h6jnz | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:60ec3d3da1ba06551932e9ebf8f98bd2cdf5e18c0b4b05c124847b7672458094" in 3.747s (3.747s including waiting). Image size: 451496534 bytes. |
| | openshift-nmstate | operator-lifecycle-manager | install-55mqd | AppliedWithWarnings | 1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202603041813" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-h6jnz | Started | Started container nmstate-operator |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-h6jnz | Created | Created container: nmstate-operator |
| | metallb-system | replicaset-controller | metallb-operator-controller-manager-6d7b76b756 | SuccessfulCreate | Created pod: metallb-operator-controller-manager-6d7b76b756-hw274 |
| | metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-6d7b76b756 to 1 |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-754b74fdf5 to 1 |
| | metallb-system | multus | metallb-operator-webhook-server-754b74fdf5-vvbj2 | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes |
| | metallb-system | multus | metallb-operator-controller-manager-6d7b76b756-hw274 | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-controller-manager-6d7b76b756-hw274 | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:9d74242d31d5f83bb8207d71e2a766ce9ababf218795d5c6fbb50450af5c29e8" |
| | metallb-system | replicaset-controller | metallb-operator-webhook-server-754b74fdf5 | SuccessfulCreate | Created pod: metallb-operator-webhook-server-754b74fdf5-vvbj2 |
| | metallb-system | kubelet | metallb-operator-webhook-server-754b74fdf5-vvbj2 | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" |
| | metallb-system | operator-lifecycle-manager | install-zpljv | AppliedWithWarnings | 1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202603040208" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | NeedsReinstall | calculated deployment install is bad |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | RequirementsUnknown | requirements not yet checked |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | AllRequirementsMet | all requirements found, attempting install |
| (x2) | openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | kubelet | metallb-operator-controller-manager-6d7b76b756-hw274 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:9d74242d31d5f83bb8207d71e2a766ce9ababf218795d5c6fbb50450af5c29e8" in 10.045s (10.046s including waiting). Image size: 462537291 bytes. |
| | metallb-system | kubelet | metallb-operator-webhook-server-754b74fdf5-vvbj2 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" in 10.019s (10.019s including waiting). Image size: 555122396 bytes. |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallSucceeded | waiting for install components to report healthy |
| | metallb-system | kubelet | metallb-operator-webhook-server-754b74fdf5-vvbj2 | Created | Created container: webhook-server |
| | metallb-system | kubelet | metallb-operator-controller-manager-6d7b76b756-hw274 | Started | Started container manager |
| | metallb-system | kubelet | metallb-operator-controller-manager-6d7b76b756-hw274 | Created | Created container: manager |
| | metallb-system | metallb-operator-controller-manager-6d7b76b756-hw274_230d113b-f720-4c89-8e6f-f2321d1ec562 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-6d7b76b756-hw274_230d113b-f720-4c89-8e6f-f2321d1ec562 became leader |
| | metallb-system | kubelet | metallb-operator-webhook-server-754b74fdf5-vvbj2 | Started | Started container webhook-server |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| | cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-545d4d4674 to 1 |
| | default | cert-manager-istio-csr-controller | | ControllerStarted | controller is starting |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace |
| | cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-5545bd876 to 1 |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | AllRequirementsMet | all requirements found, attempting install |
| (x10) | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | FailedCreate | Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-8ff7d675 | SuccessfulCreate | Created pod: obo-prometheus-operator-8ff7d675-wdrhg |
| | openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-8ff7d675 to 1 |
| | openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-6dd7dd855f to 1 |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-wdrhg | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:161082f81c8c77471a421b3b4bcb8a47ca64aa08a5dd1abf27e7f2f964b35a2a" |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7d55d7cd7f | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7d55d7cd7f | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 |
| | openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-7d55d7cd7f to 2 |
| | openshift-operators | multus | obo-prometheus-operator-8ff7d675-wdrhg | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes |
| | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | SuccessfulCreate | Created pod: cert-manager-cainjector-5545bd876-z82hq |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 | AddedInterface | Add eth0 [10.128.0.133/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes |
| | openshift-operators | replicaset-controller | observability-operator-6dd7dd855f | SuccessfulCreate | Created pod: observability-operator-6dd7dd855f-lm5gw |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" |
| | openshift-operators | multus | observability-operator-6dd7dd855f-lm5gw | AddedInterface | Add eth0 [10.128.0.134/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" |
| | openshift-operators | replicaset-controller | perses-operator-f44656786 | SuccessfulCreate | Created pod: perses-operator-f44656786-v74wx |
| | openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-f44656786 to 1 |
| | cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-6888856db4 to 1 |
| | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | SuccessfulCreate | Created pod: cert-manager-webhook-6888856db4-mgklh |
| | openshift-operators | multus | perses-operator-f44656786-v74wx | AddedInterface | Add eth0 [10.128.0.136/23] from ovn-kubernetes |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-z82hq | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-lm5gw | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:29ffc7689432fad53f18d3e12a1b335b69d49dbdcb7d8b4a77078bc7f79f941f" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operators | kubelet | perses-operator-f44656786-v74wx | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:f78b160ba3b815f53d6a72425f3f3a9d7946795177bd68c7c614fa84f97be630" |
| | cert-manager | multus | cert-manager-cainjector-5545bd876-z82hq | AddedInterface | Add eth0 [10.128.0.135/23] from ovn-kubernetes |
| | cert-manager | multus | cert-manager-webhook-6888856db4-mgklh | AddedInterface | Add eth0 [10.128.0.137/23] from ovn-kubernetes |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-mgklh | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallWaiting | installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. |
| (x12) | cert-manager | replicaset-controller | cert-manager-545d4d4674 | FailedCreate | Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-mgklh | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 10.284s (10.284s including waiting). Image size: 319887149 bytes. |
| | openshift-operators | kubelet | perses-operator-f44656786-v74wx | Created | Created container: perses-operator |
| | openshift-operators | kubelet | perses-operator-f44656786-v74wx | Started | Started container perses-operator |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-mgklh | Started | Started container cert-manager-webhook |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-z82hq | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 11.166s (11.166s including waiting). Image size: 319887149 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-wdrhg | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:161082f81c8c77471a421b3b4bcb8a47ca64aa08a5dd1abf27e7f2f964b35a2a" in 13.418s (13.418s including waiting). Image size: 204104155 bytes. |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-lm5gw | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:29ffc7689432fad53f18d3e12a1b335b69d49dbdcb7d8b4a77078bc7f79f941f" in 11.395s (11.395s including waiting). Image size: 343063302 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" in 12.159s (12.159s including waiting). Image size: 151317463 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 | Created | Created container: prometheus-operator-admission-webhook |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-mgklh | Created | Created container: cert-manager-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-9wk96 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | perses-operator-f44656786-v74wx | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:f78b160ba3b815f53d6a72425f3f3a9d7946795177bd68c7c614fa84f97be630" in 10.511s (10.511s including waiting). Image size: 175801363 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" in 12.189s (12.189s including waiting). Image size: 151317463 bytes. |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-z82hq | Created | Created container: cert-manager-cainjector |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-wdrhg | Created | Created container: prometheus-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-wdrhg | Started | Started container prometheus-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7d55d7cd7f-bv7hc | Started | Started container prometheus-operator-admission-webhook |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-z82hq | Started | Started container cert-manager-cainjector |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-lm5gw | Started | Started container operator |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-lm5gw | Created | Created container: operator |
| | kube-system | cert-manager-cainjector-5545bd876-z82hq_644f11c6-a201-42c6-be7e-d7621369883b | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-5545bd876-z82hq_644f11c6-a201-42c6-be7e-d7621369883b became leader |
| | cert-manager | replicaset-controller | cert-manager-545d4d4674 | SuccessfulCreate | Created pod: cert-manager-545d4d4674-29rbn |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallWaiting | installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. |
| | cert-manager | kubelet | cert-manager-545d4d4674-29rbn | Created | Created container: cert-manager-controller |
| | cert-manager | multus | cert-manager-545d4d4674-29rbn | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes |
| | cert-manager | kubelet | cert-manager-545d4d4674-29rbn | Pulled | Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine |
| | cert-manager | kubelet | cert-manager-545d4d4674-29rbn | Started | Started container cert-manager-controller |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallSucceeded | install strategy completed with no errors |
| | kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-545d4d4674-29rbn-external-cert-manager-controller became leader |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-bcc4b6f68 to 1 |
| | metallb-system | replicaset-controller | frr-k8s-webhook-server-bcc4b6f68 | SuccessfulCreate | Created pod: frr-k8s-webhook-server-bcc4b6f68-sfpc9 |
| | default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 6e278e53-4dfb-4f7d-86e5-52df689d1a1b] does not exist in namespace "" |
| | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-dttqv |
| | metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-7bb4cc7c98 to 1 |
| | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-sfpc9 | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" |
| | metallb-system | multus | frr-k8s-webhook-server-bcc4b6f68-sfpc9 | AddedInterface | Add eth0 [10.128.0.139/23] from ovn-kubernetes |
| | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found |
| | metallb-system | kubelet | frr-k8s-dttqv | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found |
| | metallb-system | replicaset-controller | controller-7bb4cc7c98 | SuccessfulCreate | Created pod: controller-7bb4cc7c98-jkh97 |
| (x2) | metallb-system | kubelet | speaker-jkzd2 | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
| | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-jkzd2 |
| | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | Started | Started container controller |
| | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" |
| | metallb-system | kubelet | frr-k8s-dttqv | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" |
| | metallb-system | multus | controller-7bb4cc7c98-jkh97 | AddedInterface | Add eth0 [10.128.0.140/23] from ovn-kubernetes |
| | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" already present on machine |
| | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | Created | Created container: controller |
| | metallb-system | kubelet | speaker-jkzd2 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" |
| | metallb-system | kubelet | speaker-jkzd2 | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" already present on machine |
| | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" in 1.261s (1.261s including waiting). Image size: 465090934 bytes. |
| | metallb-system | kubelet | speaker-jkzd2 | Started | Started container speaker |
| | openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-9b8c8685d to 1 |
| | metallb-system | kubelet | speaker-jkzd2 | Created | Created container: speaker |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-86f58fcf4 to 1 |
metallb-system |
kubelet |
controller-7bb4cc7c98-jkh97 |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-handler-gns5r |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" | |
metallb-system |
kubelet |
controller-7bb4cc7c98-jkh97 |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
daemonset-controller |
nmstate-handler |
SuccessfulCreate |
Created pod: nmstate-handler-gns5r | |
openshift-nmstate |
deployment-controller |
nmstate-webhook |
ScalingReplicaSet |
Scaled up replica set nmstate-webhook-5f558f5558 to 1 | |
openshift-nmstate |
replicaset-controller |
nmstate-webhook-5f558f5558 |
SuccessfulCreate |
Created pod: nmstate-webhook-5f558f5558-5wgm6 | |
| (x21) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| (x4) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-console | replicaset-controller | console-5fdb5b65cd | SuccessfulCreate | Created pod: console-5fdb5b65cd-fdkqt |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| (x2) | metallb-system | kubelet | speaker-jkzd2 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" in 1.428s (1.428s including waiting). Image size: 465090934 bytes. |
| (x2) | metallb-system | kubelet | controller-7bb4cc7c98-jkh97 | Started | Started container kube-rbac-proxy |
| (x2) | openshift-nmstate | replicaset-controller | nmstate-metrics-9b8c8685d | SuccessfulCreate | Created pod: nmstate-metrics-9b8c8685d-cpgt6 |
| (x2) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well") |
| (x2) | openshift-nmstate | replicaset-controller | nmstate-console-plugin-86f58fcf4 | SuccessfulCreate | Created pod: nmstate-console-plugin-86f58fcf4-dlgsc |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-86f58fcf4 to 1 |
| | openshift-nmstate | kubelet | nmstate-handler-gns5r | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" |
| | openshift-nmstate | replicaset-controller | nmstate-webhook-5f558f5558 | SuccessfulCreate | Created pod: nmstate-webhook-5f558f5558-5wgm6 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5fdb5b65cd to 1 |
| | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-5f558f5558 to 1 |
| | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-gns5r |
| | openshift-console | kubelet | console-5fdb5b65cd-fdkqt | Created | Created container: console |
| (x2) | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-dlgsc | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:dcf6081eab6e9ce9595482d29ae143452dfc76682cc40354a9a64c8e3284c83a" |
| (x2) | metallb-system | kubelet | speaker-jkzd2 | Started | Started container kube-rbac-proxy |
| (x2) | openshift-nmstate | multus | nmstate-metrics-9b8c8685d-cpgt6 | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" |
| (x2) | openshift-nmstate | multus | nmstate-console-plugin-86f58fcf4-dlgsc | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes |
| (x2) | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-5wgm6 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" |
| (x2) | openshift-nmstate | multus | nmstate-webhook-5f558f5558-5wgm6 | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-5fdb5b65cd-fdkqt | Started | Started container console |
| | openshift-console | multus | console-5fdb5b65cd-fdkqt | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes |
| (x2) | metallb-system | kubelet | speaker-jkzd2 | Created | Created container: kube-rbac-proxy |
| | openshift-console | kubelet | console-5fdb5b65cd-fdkqt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available") |
| (x2) | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-5wgm6 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" in 5.111s (5.111s including waiting). Image size: 489111276 bytes. |
| (x2) | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-sfpc9 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" in 8.972s (8.972s including waiting). Image size: 662223062 bytes. |
| (x2) | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-dlgsc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:dcf6081eab6e9ce9595482d29ae143452dfc76682cc40354a9a64c8e3284c83a" in 5.099s (5.099s including waiting). Image size: 453916031 bytes. |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" in 8.776s (8.776s including waiting). Image size: 662223062 bytes. |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" in 5.122s (5.123s including waiting). Image size: 489111276 bytes. |
| (x2) | openshift-nmstate | kubelet | nmstate-handler-gns5r | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" in 6.107s (6.107s including waiting). Image size: 489111276 bytes. |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" already present on machine |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container cp-reloader |
| (x2) | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-sfpc9 | Created | Created container: frr-k8s-webhook-server |
| (x2) | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-dlgsc | Created | Created container: nmstate-console-plugin |
| (x2) | openshift-nmstate | kubelet | nmstate-handler-gns5r | Created | Created container: nmstate-handler |
| (x2) | openshift-nmstate | kubelet | nmstate-handler-gns5r | Started | Started container nmstate-handler |
| (x2) | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-sfpc9 | Started | Started container frr-k8s-webhook-server |
| (x2) | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-5wgm6 | Started | Started container nmstate-webhook |
| (x2) | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-5wgm6 | Created | Created container: nmstate-webhook |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: cp-frr-files |
| (x2) | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-dlgsc | Started | Started container nmstate-console-plugin |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: cp-reloader |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container cp-frr-files |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Started | Started container kube-rbac-proxy |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Created | Created container: nmstate-metrics |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Started | Started container nmstate-metrics |
| (x2) | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-cpgt6 | Created | Created container: kube-rbac-proxy |
| (x10) | metallb-system | kubelet | frr-k8s-dttqv | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: cp-metrics |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container cp-metrics |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: frr |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" already present on machine |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: frr-metrics |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container frr |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: reloader |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container frr-metrics |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container reloader |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Started | Started container controller |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: controller |
| (x2) | metallb-system | kubelet | frr-k8s-dttqv | Created | Created container: kube-rbac-proxy |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-54cf565479 to 0 from 1 |
| | openshift-console | kubelet | console-54cf565479-phtrp | Killing | Stopping container console |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.35, 2 replicas available" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| | openshift-console | replicaset-controller | console-54cf565479 | SuccessfulDelete | Deleted pod: console-54cf565479-phtrp |
| (x2) | openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-jzfd5 |
| (x2) | openshift-storage | multus | vg-manager-jzfd5 | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes |
| (x12) | openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| (x2) | openshift-storage | kubelet | vg-manager-jzfd5 | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x2) | openshift-storage | kubelet | vg-manager-jzfd5 | Created | Created container: vg-manager |
| (x2) | openshift-storage | kubelet | vg-manager-jzfd5 | Started | Started container vg-manager |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace |
| (x2) | openstack-operators | kubelet | openstack-operator-index-k889w | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" |
| (x2) | openstack-operators | multus | openstack-operator-index-k889w | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes |
| (x2) | openstack-operators | kubelet | openstack-operator-index-k889w | Started | Started container registry-server |
| (x2) | openstack-operators | kubelet | openstack-operator-index-k889w | Created | Created container: registry-server |
| (x7) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
| (x2) | openstack-operators | kubelet | openstack-operator-index-k889w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 993ms (993ms including waiting). Image size: 918642352 bytes. |
| (x3) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.28.96:50051: connect: connection refused" |
| (x2) | openstack-operators | job-controller | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cbbc26 | SuccessfulCreate | Created pod: 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Created | Created container: util |
| (x2) | openstack-operators | multus | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:36856d22fbbd028e148ba6b5277b8d8be928cf7c" |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Started | Started container util |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:36856d22fbbd028e148ba6b5277b8d8be928cf7c" in 692ms (692ms including waiting). Image size: 115773 bytes. |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Created | Created container: pull |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Started | Started container extract |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Created | Created container: extract |
| (x2) | openstack-operators | kubelet | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cxhrrv | Started | Started container pull |
| (x2) | openstack-operators | job-controller | 7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cbbc26 | Completed | Job completed |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsNotMet |
one or more requirements couldn't be found | |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | AllRequirementsMet | all requirements found, attempting install |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | waiting for install components to report healthy |
| | openstack-operators | deployment-controller | openstack-operator-controller-init | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-init-b85c4d696 to 1 |
| | openstack-operators | replicaset-controller | openstack-operator-controller-init-b85c4d696 | SuccessfulCreate | Created pod: openstack-operator-controller-init-b85c4d696-8qpd5 |
| | openstack-operators | multus | openstack-operator-controller-init-b85c4d696-8qpd5 | AddedInterface | Add eth0 [10.128.0.148/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-init-b85c4d696-8qpd5 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:5ab774ebf20af59fbaa509688a104dc74488b7cd8c4b0640d16924de8ead64fb" |
| | openstack-operators | kubelet | openstack-operator-controller-init-b85c4d696-8qpd5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:5ab774ebf20af59fbaa509688a104dc74488b7cd8c4b0640d16924de8ead64fb" in 5.063s (5.063s including waiting). Image size: 293357387 bytes. |
| | openstack-operators | openstack-operator-controller-init-b85c4d696-8qpd5_8a79c68c-72aa-49cf-aaa4-21ed8a3fcd12 | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-init-b85c4d696-8qpd5_8a79c68c-72aa-49cf-aaa4-21ed8a3fcd12 became leader |
| | openstack-operators | kubelet | openstack-operator-controller-init-b85c4d696-8qpd5 | Created | Created container: operator |
| | openstack-operators | kubelet | openstack-operator-controller-init-b85c4d696-8qpd5 | Started | Started container operator |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | install strategy completed with no errors |
| | openstack-operators | cert-manager-certificates-trigger | cinder-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | barbican-operator-metrics-certs | Requested | Created new CertificateRequest resource "barbican-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-trigger | barbican-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | barbican-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | barbican-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-vrfpj" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | cinder-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-request-manager | cinder-operator-metrics-certs | Requested | Created new CertificateRequest resource "cinder-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-key-manager | cinder-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-4wnnn" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | cinder-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-trigger | designate-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | designate-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "designate-operator-metrics-certs-7xpgp" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-trigger | glance-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | designate-operator-metrics-certs | Requested | Created new CertificateRequest resource "designate-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | designate-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-trigger | horizon-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | heat-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | glance-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "glance-operator-metrics-certs-md6cl" |
| | openstack-operators | cert-manager-certificates-key-manager | heat-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "heat-operator-metrics-certs-kg8bc" |
| | openstack-operators | cert-manager-certificates-trigger | ironic-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | infra-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | mariadb-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | neutron-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | nova-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | ironic-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-pdcdd" |
| | openstack-operators | cert-manager-certificates-trigger | manila-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | horizon-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-85z9c" |
| | openstack-operators | cert-manager-certificates-trigger | octavia-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | ovn-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "infra-operator-metrics-certs-htp2c" |
| | openstack-operators | cert-manager-certificates-key-manager | manila-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "manila-operator-metrics-certs-s8fgp" |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | keystone-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-5b9f45d989 to 1 | |
openstack-operators |
deployment-controller |
ovn-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ovn-operator-controller-manager-884679f54 to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
ovn-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ovn-operator-controller-manager-884679f54 to 1 | |
openstack-operators |
deployment-controller |
designate-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set designate-operator-controller-manager-588d4d986b to 1 | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-588d4d986b |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-588d4d986b-lmp5n | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
ovn-operator-controller-manager-884679f54 |
SuccessfulCreate |
Created pod: ovn-operator-controller-manager-884679f54-7fq2b | |
openstack-operators |
deployment-controller |
placement-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set placement-operator-controller-manager-5784578c99 to 1 | |
openstack-operators |
replicaset-controller |
placement-operator-controller-manager-5784578c99 |
SuccessfulCreate |
Created pod: placement-operator-controller-manager-5784578c99-4tjlx | |
openstack-operators |
replicaset-controller |
mariadb-operator-controller-manager-67ccfc9778 |
SuccessfulCreate |
Created pod: mariadb-operator-controller-manager-67ccfc9778-s5trr | |
openstack-operators |
deployment-controller |
mariadb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set mariadb-operator-controller-manager-67ccfc9778 to 1 | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
replicaset-controller |
glance-operator-controller-manager-79df6bcc97 |
SuccessfulCreate |
Created pod: glance-operator-controller-manager-79df6bcc97-sq7cg | |
openstack-operators |
deployment-controller |
glance-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set glance-operator-controller-manager-79df6bcc97 to 1 | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-67dd5f86f5 to 1 | |
openstack-operators |
replicaset-controller |
octavia-operator-controller-manager-5b9f45d989 |
SuccessfulCreate |
Created pod: octavia-operator-controller-manager-5b9f45d989-jv72h | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-67dd5f86f5 |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-67dd5f86f5-ft2mk | |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-5b9f45d989 to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
nova-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set nova-operator-controller-manager-5d488d59fb to 1 | |
openstack-operators |
replicaset-controller |
nova-operator-controller-manager-5d488d59fb |
SuccessfulCreate |
Created pod: nova-operator-controller-manager-5d488d59fb-pw2xk | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
neutron-operator-controller-manager-767865f676 |
SuccessfulCreate |
Created pod: neutron-operator-controller-manager-767865f676-r78pl | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
deployment-controller |
designate-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set designate-operator-controller-manager-588d4d986b to 1 | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-588d4d986b |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-588d4d986b-lmp5n | |
openstack-operators |
replicaset-controller |
glance-operator-controller-manager-79df6bcc97 |
SuccessfulCreate |
Created pod: glance-operator-controller-manager-79df6bcc97-sq7cg | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
glance-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set glance-operator-controller-manager-79df6bcc97 to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
cinder-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set cinder-operator-controller-manager-8d58dc466 to 1 | |
openstack-operators |
replicaset-controller |
cinder-operator-controller-manager-8d58dc466 |
SuccessfulCreate |
Created pod: cinder-operator-controller-manager-8d58dc466-zvf6m | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
manila-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set manila-operator-controller-manager-55f864c847 to 1 | |
openstack-operators |
replicaset-controller |
manila-operator-controller-manager-55f864c847 |
SuccessfulCreate |
Created pod: manila-operator-controller-manager-55f864c847-6n7n9 | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
keystone-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set keystone-operator-controller-manager-768b96df4c to 1 | |
openstack-operators |
replicaset-controller |
horizon-operator-controller-manager-8464cc45fb |
SuccessfulCreate |
Created pod: horizon-operator-controller-manager-8464cc45fb-b8s4c | |
openstack-operators |
replicaset-controller |
keystone-operator-controller-manager-768b96df4c |
SuccessfulCreate |
Created pod: keystone-operator-controller-manager-768b96df4c-kh9rb | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
openstack-baremetal-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set openstack-baremetal-operator-controller-manager-74c4796899 to 1 | |
openstack-operators |
replicaset-controller |
openstack-baremetal-operator-controller-manager-74c4796899 |
SuccessfulCreate |
Created pod: openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
cinder-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set cinder-operator-controller-manager-8d58dc466 to 1 | |
openstack-operators |
replicaset-controller |
octavia-operator-controller-manager-5b9f45d989 |
SuccessfulCreate |
Created pod: octavia-operator-controller-manager-5b9f45d989-jv72h | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
deployment-controller |
placement-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set placement-operator-controller-manager-5784578c99 to 1 | |
openstack-operators |
replicaset-controller |
placement-operator-controller-manager-5784578c99 |
SuccessfulCreate |
Created pod: placement-operator-controller-manager-5784578c99-4tjlx | |
openstack-operators |
deployment-controller |
horizon-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set horizon-operator-controller-manager-8464cc45fb to 1 | |
openstack-operators |
replicaset-controller |
cinder-operator-controller-manager-8d58dc466 |
SuccessfulCreate |
Created pod: cinder-operator-controller-manager-8d58dc466-zvf6m | |
openstack-operators |
deployment-controller |
nova-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set nova-operator-controller-manager-5d488d59fb to 1 | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
| | openstack-operators | replicaset-controller | nova-operator-controller-manager-5d488d59fb | SuccessfulCreate | Created pod: nova-operator-controller-manager-5d488d59fb-pw2xk |
| | openstack-operators | replicaset-controller | infra-operator-controller-manager-7dd6bb94c9 | SuccessfulCreate | Created pod: infra-operator-controller-manager-7dd6bb94c9-xmlj9 |
| | openstack-operators | deployment-controller | infra-operator-controller-manager | ScalingReplicaSet | Scaled up replica set infra-operator-controller-manager-7dd6bb94c9 to 1 |
| | openstack-operators | replicaset-controller | ovn-operator-controller-manager-884679f54 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-884679f54-7fq2b |
| | openstack-operators | deployment-controller | ironic-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ironic-operator-controller-manager-6f787dddc9 to 1 |
| | openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-74c4796899 to 1 |
| | openstack-operators | replicaset-controller | ironic-operator-controller-manager-6f787dddc9 | SuccessfulCreate | Created pod: ironic-operator-controller-manager-6f787dddc9-qlfpx |
| | openstack-operators | deployment-controller | barbican-operator-controller-manager | ScalingReplicaSet | Scaled up replica set barbican-operator-controller-manager-59bc569d95 to 1 |
| | openstack-operators | deployment-controller | neutron-operator-controller-manager | ScalingReplicaSet | Scaled up replica set neutron-operator-controller-manager-767865f676 to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | replicaset-controller | barbican-operator-controller-manager-59bc569d95 | SuccessfulCreate | Created pod: barbican-operator-controller-manager-59bc569d95-j929h |
| | openstack-operators | replicaset-controller | neutron-operator-controller-manager-767865f676 | SuccessfulCreate | Created pod: neutron-operator-controller-manager-767865f676-r78pl |
| | openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-67ccfc9778 to 1 |
| | openstack-operators | replicaset-controller | heat-operator-controller-manager-67dd5f86f5 | SuccessfulCreate | Created pod: heat-operator-controller-manager-67dd5f86f5-ft2mk |
| | openstack-operators | deployment-controller | heat-operator-controller-manager | ScalingReplicaSet | Scaled up replica set heat-operator-controller-manager-67dd5f86f5 to 1 |
| | openstack-operators | replicaset-controller | mariadb-operator-controller-manager-67ccfc9778 | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-67ccfc9778-s5trr |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | manila-operator-controller-manager | ScalingReplicaSet | Scaled up replica set manila-operator-controller-manager-55f864c847 to 1 |
| | openstack-operators | replicaset-controller | keystone-operator-controller-manager-768b96df4c | SuccessfulCreate | Created pod: keystone-operator-controller-manager-768b96df4c-kh9rb |
| | openstack-operators | deployment-controller | keystone-operator-controller-manager | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-768b96df4c to 1 |
| | openstack-operators | replicaset-controller | manila-operator-controller-manager-55f864c847 | SuccessfulCreate | Created pod: manila-operator-controller-manager-55f864c847-6n7n9 |
| | openstack-operators | replicaset-controller | horizon-operator-controller-manager-8464cc45fb | SuccessfulCreate | Created pod: horizon-operator-controller-manager-8464cc45fb-b8s4c |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | test-operator-controller-manager | ScalingReplicaSet | Scaled up replica set test-operator-controller-manager-5c5cb9c4d7 to 1 |
| | openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-d6b694c5 to 1 |
| | openstack-operators | replicaset-controller | telemetry-operator-controller-manager-d6b694c5 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-d6b694c5-j5ggz |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-ktnfh" |
| | openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-6c4d75f7f9 to 1 |
| | openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-c674c5965 to 1 |
| | openstack-operators | replicaset-controller | swift-operator-controller-manager-c674c5965 | SuccessfulCreate | Created pod: swift-operator-controller-manager-c674c5965-65d6b |
| | openstack-operators | cert-manager-certificaterequests-approver | glance-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1 |
| | openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-668c99d594 | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | replicaset-controller | test-operator-controller-manager-5c5cb9c4d7 | SuccessfulCreate | Created pod: test-operator-controller-manager-5c5cb9c4d7-5znsj |
| | openstack-operators | replicaset-controller | openstack-operator-controller-manager-86bd8996f6 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-86bd8996f6-8hx4g |
| | openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-86bd8996f6 to 1 |
| | openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | multus | barbican-operator-controller-manager-59bc569d95-j929h | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-zvf6m | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" |
| | openstack-operators | multus | cinder-operator-controller-manager-8d58dc466-zvf6m | AddedInterface | Add eth0 [10.128.0.150/23] from ovn-kubernetes |
| | openstack-operators | replicaset-controller | watcher-operator-controller-manager-6c4d75f7f9 | SuccessfulCreate | Created pod: watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
| | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-hntdt" |
| | openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-j929h | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:7562d3e09bdac17f447f4523c5bd784c5f5ab5ca9cb2370a03b86126d6d7301d" |
| | openstack-operators | multus | designate-operator-controller-manager-588d4d986b-lmp5n | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | heat-operator-controller-manager-67dd5f86f5-ft2mk | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-lmp5n | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | glance-operator-controller-manager-79df6bcc97-sq7cg | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-sq7cg | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | neutron-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-mtgzv" |
| | openstack-operators | cert-manager-certificates-key-manager | mariadb-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-lfvxs" |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-ft2mk | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1" |
| | openstack-operators | multus | horizon-operator-controller-manager-8464cc45fb-b8s4c | AddedInterface | Add eth0 [10.128.0.154/23] from ovn-kubernetes |
| | openstack-operators | multus | keystone-operator-controller-manager-768b96df4c-kh9rb | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | multus | manila-operator-controller-manager-55f864c847-6n7n9 | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-b8s4c | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" |
| | openstack-operators | multus | nova-operator-controller-manager-5d488d59fb-pw2xk | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-7fq2b | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-kh9rb | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-6n7n9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" |
| | openstack-operators | multus | telemetry-operator-controller-manager-d6b694c5-j5ggz | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-j5ggz | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-65d6b | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" |
| | openstack-operators | multus | swift-operator-controller-manager-c674c5965-65d6b | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | multus | test-operator-controller-manager-5c5cb9c4d7-5znsj | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-5gttd" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-pw2xk | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-jv72h | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" |
| | openstack-operators | multus | octavia-operator-controller-manager-5b9f45d989-jv72h | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-key-manager | nova-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "nova-operator-metrics-certs-mttmj" |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-r78pl | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" |
| | openstack-operators | multus | mariadb-operator-controller-manager-67ccfc9778-s5trr | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes |
| | openstack-operators | multus | neutron-operator-controller-manager-767865f676-r78pl | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | placement-operator-controller-manager-5784578c99-4tjlx | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-s5trr | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | ovn-operator-controller-manager-884679f54-7fq2b | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6f787dddc9-qlfpx | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8" |
| | openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
openstack-operators |
multus |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
AddedInterface |
Add eth0 [10.128.0.156/23] from ovn-kubernetes | |
openstack-operators |
multus |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
AddedInterface |
Add eth0 [10.128.0.156/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8" | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
multus |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
AddedInterface |
Add eth0 [10.128.0.169/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" | |
openstack-operators |
cert-manager-certificaterequests-approver |
ovn-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
multus |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
AddedInterface |
Add eth0 [10.128.0.171/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-4tjlx |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" | |
openstack-operators |
cert-manager-certificates-key-manager |
keystone-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-cdl58" | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" | |
openstack-operators |
multus |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
AddedInterface |
Add eth0 [10.128.0.169/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-approver |
ovn-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-key-manager |
keystone-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-cdl58" | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" | |
openstack-operators |
kubelet |
test-operator-controller-manager-5c5cb9c4d7-5znsj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-4tjlx |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
multus |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
AddedInterface |
Add eth0 [10.128.0.171/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
test-operator-controller-manager-5c5cb9c4d7-5znsj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" | |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
octavia-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-c7zkc" | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-9cfd8" | |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
octavia-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-c7zkc" | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-2r2zg" | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-9cfd8" | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-2r2zg" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
keystone-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "keystone-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
keystone-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "keystone-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
ironic-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
ironic-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-74ccd" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-74ccd" | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "infra-operator-serving-cert-5bjsj" | |
openstack-operators |
cert-manager-certificaterequests-approver |
octavia-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
watcher-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-wsxjp" | |
openstack-operators |
cert-manager-certificates-request-manager |
telemetry-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
watcher-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-wsxjp" | |
openstack-operators |
cert-manager-certificaterequests-approver |
octavia-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "infra-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
telemetry-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "infra-operator-serving-cert-5bjsj" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
keystone-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "infra-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
keystone-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
manila-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "manila-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x5) | openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-xmlj9 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-request-manager |
test-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "test-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-x4x6n" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-x4x6n" |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-4mwvn" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-4mwvn" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-rr22k" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-rr22k" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
neutron-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
neutron-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
nova-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
mariadb-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
mariadb-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
nova-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
test-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
test-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
placement-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
placement-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x6) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-74c4796899dzhg7 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x6) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-74c4796899dzhg7 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-issuing |
watcher-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
watcher-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-86bd8996f6-8hx4g |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-86bd8996f6-8hx4g |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
swift-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "swift-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
swift-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "swift-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
swift-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
swift-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-8464cc45fb-b8s4c |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" in 21.971s (21.971s including waiting). Image size: 190382026 bytes. | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-767865f676-r78pl |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" in 21.516s (21.516s including waiting). Image size: 191045581 bytes. | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-8d58dc466-zvf6m |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" in 22.951s (22.951s including waiting). Image size: 191447488 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8" in 21.644s (21.644s including waiting). Image size: 191690176 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-55f864c847-6n7n9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" in 21.649s (21.649s including waiting). Image size: 191263167 bytes. | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" in 20.585s (20.585s including waiting). Image size: 191011789 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-67dd5f86f5-ft2mk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" in 22.485s (22.485s including waiting). Image size: 191633317 bytes. | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-67ccfc9778-s5trr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" in 21.484s (21.484s including waiting). Image size: 189431506 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-884679f54-7fq2b |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" in 21.035s (21.035s including waiting). Image size: 190114710 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-768b96df4c-kh9rb |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" in 21.732s (21.732s including waiting). Image size: 193037461 bytes. | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-59bc569d95-j929h |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:7562d3e09bdac17f447f4523c5bd784c5f5ab5ca9cb2370a03b86126d6d7301d" in 22.765s (22.765s including waiting). Image size: 191122394 bytes. | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-767865f676-r78pl |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" in 21.516s (21.516s including waiting). Image size: 191045581 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-6f787dddc9-qlfpx |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8" in 21.644s (21.644s including waiting). Image size: 191690176 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-5c5cb9c4d7-5znsj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" in 20.65s (20.65s including waiting). Image size: 188906426 bytes. | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-4tjlx |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" in 20.721s (20.721s including waiting). Image size: 190627813 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-768b96df4c-kh9rb |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" in 21.732s (21.732s including waiting). Image size: 193037461 bytes. | |
openstack-operators |
kubelet |
glance-operator-controller-manager-79df6bcc97-sq7cg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" in 22.023s (22.023s including waiting). Image size: 192008127 bytes. | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-4tjlx |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" in 20.721s (20.721s including waiting). Image size: 190627813 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-d6b694c5-j5ggz |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" in 21.231s (21.231s including waiting). Image size: 196297190 bytes. | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-2pmjv |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" in 20.585s (20.585s including waiting). Image size: 191011789 bytes. | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-67ccfc9778-s5trr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" in 21.484s (21.484s including waiting). Image size: 189431506 bytes. | |
openstack-operators |
kubelet |
nova-operator-controller-manager-5d488d59fb-pw2xk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" in 21.476s (21.476s including waiting). Image size: 193632103 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-5b9f45d989-jv72h |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" in 21.001s (21.001s including waiting). Image size: 193570760 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-d6b694c5-j5ggz |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" in 21.231s (21.231s including waiting). Image size: 196297190 bytes. | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-59bc569d95-j929h |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:7562d3e09bdac17f447f4523c5bd784c5f5ab5ca9cb2370a03b86126d6d7301d" in 22.765s (22.765s including waiting). Image size: 191122394 bytes. | |
openstack-operators |
kubelet |
nova-operator-controller-manager-5d488d59fb-pw2xk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" in 21.476s (21.476s including waiting). Image size: 193632103 bytes. | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-8d58dc466-zvf6m |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" in 22.951s (22.951s including waiting). Image size: 191447488 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-67dd5f86f5-ft2mk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" in 22.485s (22.485s including waiting). Image size: 191633317 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-55f864c847-6n7n9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" in 21.649s (21.649s including waiting). Image size: 191263167 bytes. | |
openstack-operators |
kubelet |
swift-operator-controller-manager-c674c5965-65d6b |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" in 20.765s (20.765s including waiting). Image size: 192133556 bytes. | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-bbgx4 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 20.703s (20.703s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-5c5cb9c4d7-5znsj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" in 20.65s (20.65s including waiting). Image size: 188906426 bytes. | |
openstack-operators |
kubelet |
designate-operator-controller-manager-588d4d986b-lmp5n |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" in 22.557s (22.557s including waiting). Image size: 195976677 bytes. | |
openstack-operators |
kubelet |
designate-operator-controller-manager-588d4d986b-lmp5n |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" in 22.557s (22.557s including waiting). Image size: 195976677 bytes. | |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-sq7cg | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" in 22.023s (22.023s including waiting). Image size: 192008127 bytes. |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-bbgx4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 20.703s (20.703s including waiting). Image size: 176351298 bytes. |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-jv72h | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" in 21.001s (21.001s including waiting). Image size: 193570760 bytes. |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-65d6b | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" in 20.765s (20.765s including waiting). Image size: 192133556 bytes. |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-b8s4c | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" in 21.971s (21.971s including waiting). Image size: 190382026 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-7fq2b | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" in 21.035s (21.035s including waiting). Image size: 190114710 bytes. |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-b8s4c | Started | Started container manager |
| | openstack-operators | keystone-operator-controller-manager-768b96df4c-kh9rb_8e51deaa-56cc-4015-80f1-037cd6f63261 | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-768b96df4c-kh9rb_8e51deaa-56cc-4015-80f1-037cd6f63261 became leader |
| | openstack-operators | mariadb-operator-controller-manager-67ccfc9778-s5trr_4308a5d9-ae60-4575-a9ce-f46f21ba17da | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-67ccfc9778-s5trr_4308a5d9-ae60-4575-a9ce-f46f21ba17da became leader |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-2pmjv | Created | Created container: manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-b8s4c | Created | Created container: manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-jv72h | Started | Started container manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-jv72h | Created | Created container: manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-j5ggz | Created | Created container: manager |
| | openstack-operators | swift-operator-controller-manager-c674c5965-65d6b_72427fb1-ba80-4cf0-83d5-61f0c3fe602f | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-c674c5965-65d6b_72427fb1-ba80-4cf0-83d5-61f0c3fe602f became leader |
| | openstack-operators | multus | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes |
| | openstack-operators | barbican-operator-controller-manager-59bc569d95-j929h_6d19df44-ad27-48ea-9bfa-821cf41fe1d3 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-59bc569d95-j929h_6d19df44-ad27-48ea-9bfa-821cf41fe1d3 became leader |
| | openstack-operators | cinder-operator-controller-manager-8d58dc466-zvf6m_25579007-014d-494a-a483-5b9f261b3f95 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-8d58dc466-zvf6m_25579007-014d-494a-a483-5b9f261b3f95 became leader |
| | openstack-operators | octavia-operator-controller-manager-5b9f45d989-jv72h_5bd4b541-725f-4bf7-a274-c9cb62a3bc8c | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-5b9f45d989-jv72h_5bd4b541-725f-4bf7-a274-c9cb62a3bc8c became leader |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-65d6b | Created | Created container: manager |
| | openstack-operators | neutron-operator-controller-manager-767865f676-r78pl_32c9bb42-0aa8-4a2d-adb9-1f2fe7d009a8 | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-767865f676-r78pl_32c9bb42-0aa8-4a2d-adb9-1f2fe7d009a8 became leader |
| | openstack-operators | barbican-operator-controller-manager-59bc569d95-j929h_6d19df44-ad27-48ea-9bfa-821cf41fe1d3 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-59bc569d95-j929h_6d19df44-ad27-48ea-9bfa-821cf41fe1d3 became leader |
| | openstack-operators | swift-operator-controller-manager-c674c5965-65d6b_72427fb1-ba80-4cf0-83d5-61f0c3fe602f | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-c674c5965-65d6b_72427fb1-ba80-4cf0-83d5-61f0c3fe602f became leader |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-65d6b | Created | Created container: manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-65d6b | Started | Started container manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-65d6b | Started | Started container manager |
| | openstack-operators | keystone-operator-controller-manager-768b96df4c-kh9rb_8e51deaa-56cc-4015-80f1-037cd6f63261 | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-768b96df4c-kh9rb_8e51deaa-56cc-4015-80f1-037cd6f63261 became leader |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-pw2xk | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-pw2xk | Created | Created container: manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-b8s4c | Started | Started container manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-j929h | Created | Created container: manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-jv72h | Started | Started container manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-jv72h | Created | Created container: manager |
| | openstack-operators | neutron-operator-controller-manager-767865f676-r78pl_32c9bb42-0aa8-4a2d-adb9-1f2fe7d009a8 | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-767865f676-r78pl_32c9bb42-0aa8-4a2d-adb9-1f2fe7d009a8 became leader |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-j929h | Started | Started container manager |
| | openstack-operators | mariadb-operator-controller-manager-67ccfc9778-s5trr_4308a5d9-ae60-4575-a9ce-f46f21ba17da | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-67ccfc9778-s5trr_4308a5d9-ae60-4575-a9ce-f46f21ba17da became leader |
| | openstack-operators | octavia-operator-controller-manager-5b9f45d989-jv72h_5bd4b541-725f-4bf7-a274-c9cb62a3bc8c | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-5b9f45d989-jv72h_5bd4b541-725f-4bf7-a274-c9cb62a3bc8c became leader |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-s5trr | Created | Created container: manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-s5trr | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-r78pl | Started | Started container manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-r78pl | Created | Created container: manager |
| | openstack-operators | multus | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-kh9rb | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-sq7cg | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-sq7cg | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-kh9rb | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-pw2xk | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-b8s4c | Created | Created container: manager |
| | openstack-operators | cinder-operator-controller-manager-8d58dc466-zvf6m_25579007-014d-494a-a483-5b9f261b3f95 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-8d58dc466-zvf6m_25579007-014d-494a-a483-5b9f261b3f95 became leader |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-r78pl | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-7fq2b | Created | Created container: manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-r78pl | Started | Started container manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-zvf6m | Started | Started container manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-2pmjv | Created | Created container: manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-zvf6m | Created | Created container: manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6f787dddc9-qlfpx | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-pw2xk | Created | Created container: manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-sq7cg | Started | Started container manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-s5trr | Started | Started container manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-s5trr | Created | Created container: manager |
| | openstack-operators | glance-operator-controller-manager-79df6bcc97-sq7cg_3d5d77dc-157d-4822-9946-45d8a577c0f4 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-79df6bcc97-sq7cg_3d5d77dc-157d-4822-9946-45d8a577c0f4 became leader |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-5znsj | Started | Started container manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-kh9rb | Started | Started container manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-kh9rb | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-5znsj | Created | Created container: manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6f787dddc9-qlfpx | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-j929h | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-j929h | Started | Started container manager |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-5znsj | Started | Started container manager |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-5znsj | Created | Created container: manager |
| | openstack-operators | glance-operator-controller-manager-79df6bcc97-sq7cg_3d5d77dc-157d-4822-9946-45d8a577c0f4 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-79df6bcc97-sq7cg_3d5d77dc-157d-4822-9946-45d8a577c0f4 became leader |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-j5ggz | Created | Created container: manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-zvf6m | Created | Created container: manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-zvf6m | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-sq7cg | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-7fq2b | Created | Created container: manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-bbgx4 | Started | Started container operator |
| | openstack-operators | heat-operator-controller-manager-67dd5f86f5-ft2mk_913f7d99-bf70-4caf-babf-f9f3d14eb690 | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-67dd5f86f5-ft2mk_913f7d99-bf70-4caf-babf-f9f3d14eb690 became leader |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-6n7n9 | Created | Created container: manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-6n7n9 | Started | Started container manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-lmp5n | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-lmp5n | Started | Started container manager |
| | openstack-operators | nova-operator-controller-manager-5d488d59fb-pw2xk_847af28f-708a-43a5-b35e-ac41fbc01aaa | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-5d488d59fb-pw2xk_847af28f-708a-43a5-b35e-ac41fbc01aaa became leader |
| | openstack-operators | ironic-operator-controller-manager-6f787dddc9-qlfpx_131d7a59-123f-409d-86c7-6b11a3e375e3 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-6f787dddc9-qlfpx_131d7a59-123f-409d-86c7-6b11a3e375e3 became leader |
| | openstack-operators | designate-operator-controller-manager-588d4d986b-lmp5n_2fdf604d-1bb5-4c7a-b517-6904e9bc9af8 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-588d4d986b-lmp5n_2fdf604d-1bb5-4c7a-b517-6904e9bc9af8 became leader |
| | openstack-operators | telemetry-operator-controller-manager-d6b694c5-j5ggz_30ab41c6-a7ea-4ddd-a3bf-0fb383c44a86 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-d6b694c5-j5ggz_30ab41c6-a7ea-4ddd-a3bf-0fb383c44a86 became leader |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-ft2mk | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-ft2mk | Started | Started container manager |
| | openstack-operators | designate-operator-controller-manager-588d4d986b-lmp5n_2fdf604d-1bb5-4c7a-b517-6904e9bc9af8 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-588d4d986b-lmp5n_2fdf604d-1bb5-4c7a-b517-6904e9bc9af8 became leader |
| | openstack-operators | ironic-operator-controller-manager-6f787dddc9-qlfpx_131d7a59-123f-409d-86c7-6b11a3e375e3 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-6f787dddc9-qlfpx_131d7a59-123f-409d-86c7-6b11a3e375e3 became leader |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-j5ggz | Started | Started container manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-ft2mk | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-ft2mk | Started | Started container manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-bbgx4 | Created | Created container: operator |
| | openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-bbgx4_30f80033-d4a0-41d1-bee6-c3bbfa2dcdf1 | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-bbgx4_30f80033-d4a0-41d1-bee6-c3bbfa2dcdf1 became leader |
| | openstack-operators | watcher-operator-controller-manager-6c4d75f7f9-2pmjv_9cb24b1b-b992-4399-970a-1c51e17feca3 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-6c4d75f7f9-2pmjv_9cb24b1b-b992-4399-970a-1c51e17feca3 became leader |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6f787dddc9-qlfpx | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-j5ggz | Started | Started container manager |
| | openstack-operators | horizon-operator-controller-manager-8464cc45fb-b8s4c_ee77b19d-da25-4979-9466-214f81dc5b9f | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-8464cc45fb-b8s4c_ee77b19d-da25-4979-9466-214f81dc5b9f became leader |
| | openstack-operators | nova-operator-controller-manager-5d488d59fb-pw2xk_847af28f-708a-43a5-b35e-ac41fbc01aaa | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-5d488d59fb-pw2xk_847af28f-708a-43a5-b35e-ac41fbc01aaa became leader |
| | openstack-operators | watcher-operator-controller-manager-6c4d75f7f9-2pmjv_9cb24b1b-b992-4399-970a-1c51e17feca3 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-6c4d75f7f9-2pmjv_9cb24b1b-b992-4399-970a-1c51e17feca3 became leader |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-7fq2b | Started | Started container manager |
| | openstack-operators | horizon-operator-controller-manager-8464cc45fb-b8s4c_ee77b19d-da25-4979-9466-214f81dc5b9f | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-8464cc45fb-b8s4c_ee77b19d-da25-4979-9466-214f81dc5b9f became leader |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-bbgx4 | Started | Started container operator |
| | openstack-operators | test-operator-controller-manager-5c5cb9c4d7-5znsj_b3da2c14-94de-4995-bc55-8d4cbe78b909 | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-5c5cb9c4d7-5znsj_b3da2c14-94de-4995-bc55-8d4cbe78b909 became leader |
| | openstack-operators | placement-operator-controller-manager-5784578c99-4tjlx_80354a4e-5ad1-4c83-88be-beb04fc1eb3d | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-5784578c99-4tjlx_80354a4e-5ad1-4c83-88be-beb04fc1eb3d became leader |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-bbgx4 | Created | Created container: operator |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6f787dddc9-qlfpx | Started | Started container manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-2pmjv | Started | Started container manager |
| | openstack-operators | manila-operator-controller-manager-55f864c847-6n7n9_661a2700-bd83-4434-97b3-f80499df7379 | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-55f864c847-6n7n9_661a2700-bd83-4434-97b3-f80499df7379 became leader |
| | openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-bbgx4_30f80033-d4a0-41d1-bee6-c3bbfa2dcdf1 | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-bbgx4_30f80033-d4a0-41d1-bee6-c3bbfa2dcdf1 became leader |
| | openstack-operators | ovn-operator-controller-manager-884679f54-7fq2b_77851e16-963a-428f-8286-b0763e4ee30c | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-884679f54-7fq2b_77851e16-963a-428f-8286-b0763e4ee30c became leader |
| | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-4tjlx | Started | Started container manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-4tjlx | Created | Created container: manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-2pmjv | Started | Started container manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-7fq2b | Started | Started container manager |
| | openstack-operators | heat-operator-controller-manager-67dd5f86f5-ft2mk_913f7d99-bf70-4caf-babf-f9f3d14eb690 | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-67dd5f86f5-ft2mk_913f7d99-bf70-4caf-babf-f9f3d14eb690 became leader |
| | openstack-operators | test-operator-controller-manager-5c5cb9c4d7-5znsj_b3da2c14-94de-4995-bc55-8d4cbe78b909 | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-5c5cb9c4d7-5znsj_b3da2c14-94de-4995-bc55-8d4cbe78b909 became leader |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-lmp5n | Started | Started container manager |
| | openstack-operators | placement-operator-controller-manager-5784578c99-4tjlx_80354a4e-5ad1-4c83-88be-beb04fc1eb3d | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-5784578c99-4tjlx_80354a4e-5ad1-4c83-88be-beb04fc1eb3d became leader |
| | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-4tjlx | Started | Started container manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-4tjlx | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-lmp5n | Created | Created container: manager |
| | openstack-operators | telemetry-operator-controller-manager-d6b694c5-j5ggz_30ab41c6-a7ea-4ddd-a3bf-0fb383c44a86 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-d6b694c5-j5ggz_30ab41c6-a7ea-4ddd-a3bf-0fb383c44a86 became leader |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-6n7n9 | Created | Created container: manager |
| | openstack-operators | ovn-operator-controller-manager-884679f54-7fq2b_77851e16-963a-428f-8286-b0763e4ee30c | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-884679f54-7fq2b_77851e16-963a-428f-8286-b0763e4ee30c became leader |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-6n7n9 | Started | Started container manager |
| | openstack-operators | manila-operator-controller-manager-55f864c847-6n7n9_661a2700-bd83-4434-97b3-f80499df7379 | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-55f864c847-6n7n9_661a2700-bd83-4434-97b3-f80499df7379 became leader |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" in 4.186s (4.186s including waiting). Image size: 192852400 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Created | Created container: manager |
| | openstack-operators | infra-operator-controller-manager-7dd6bb94c9-xmlj9_e5b6fd55-cc04-426b-b4d0-3ea7a3344cfd | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-7dd6bb94c9-xmlj9_e5b6fd55-cc04-426b-b4d0-3ea7a3344cfd became leader |
| | openstack-operators | infra-operator-controller-manager-7dd6bb94c9-xmlj9_e5b6fd55-cc04-426b-b4d0-3ea7a3344cfd | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-7dd6bb94c9-xmlj9_e5b6fd55-cc04-426b-b4d0-3ea7a3344cfd became leader |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Created | Created container: manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-xmlj9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" in 4.186s (4.186s including waiting). Image size: 192852400 bytes. |
| | openstack-operators | multus | openstack-operator-controller-manager-86bd8996f6-8hx4g | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
| | openstack-operators | multus | openstack-operator-controller-manager-86bd8996f6-8hx4g | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | Started | Started container manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:5ab774ebf20af59fbaa509688a104dc74488b7cd8c4b0640d16924de8ead64fb" already present on machine |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" |
| | openstack-operators | openstack-operator-controller-manager-86bd8996f6-8hx4g_cc7a5622-a9b3-45c2-81cb-83720e2eb064 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-86bd8996f6-8hx4g_cc7a5622-a9b3-45c2-81cb-83720e2eb064 became leader |
| | openstack-operators | openstack-operator-controller-manager-86bd8996f6-8hx4g_cc7a5622-a9b3-45c2-81cb-83720e2eb064 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-86bd8996f6-8hx4g_cc7a5622-a9b3-45c2-81cb-83720e2eb064 became leader |
| | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:5ab774ebf20af59fbaa509688a104dc74488b7cd8c4b0640d16924de8ead64fb" already present on machine |
| | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-86bd8996f6-8hx4g | Started | Started container manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" in 2.019s (2.019s including waiting). Image size: 190544999 bytes. |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Started | Started container manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" in 2.019s (2.019s including waiting). Image size: 190544999 bytes. |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Created | Created container: manager |
| | openstack-operators | openstack-baremetal-operator-controller-manager-74c4796899dzhg7_59f7968c-5844-4114-9084-893448bdffdd | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-74c4796899dzhg7_59f7968c-5844-4114-9084-893448bdffdd became leader |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-74c4796899dzhg7 | Started | Started container manager |
| | openstack-operators | openstack-baremetal-operator-controller-manager-74c4796899dzhg7_59f7968c-5844-4114-9084-893448bdffdd | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-74c4796899dzhg7_59f7968c-5844-4114-9084-893448bdffdd became leader |
| | openstack | cert-manager-certificates-trigger | rootca-public | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-issuers | rootca-internal | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-internal" not found (x2) |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | rootca-public | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-issuers | rootca-internal | ErrInitIssuer | Error initializing issuer: secrets "rootca-internal" not found (x2) |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | rootca-public-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-trigger | rootca-internal | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-issuers | rootca-public | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-public" not found (x2) |
| | openstack | cert-manager-issuers | rootca-public | ErrInitIssuer | Error initializing issuer: secrets "rootca-public" not found (x2) |
| | openstack | cert-manager-certificates-key-manager | rootca-public | Generated | Stored new private key in temporary Secret resource "rootca-public-jwc6n" |
| | openstack | cert-manager-certificates-request-manager | rootca-public | Requested | Created new CertificateRequest resource "rootca-public-1" |
| | openstack | cert-manager-certificates-issuing | rootca-internal | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-issuers | rootca-libvirt | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-libvirt" not found (x2) |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | rootca-internal | Generated | Stored new private key in temporary Secret resource "rootca-internal-7sspq" |
| | openstack | cert-manager-certificates-request-manager | rootca-internal | Requested | Created new CertificateRequest resource "rootca-internal-1" |
| | openstack | cert-manager-certificaterequests-approver | rootca-internal-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | rootca-libvirt | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-issuers | rootca-libvirt | ErrInitIssuer | Error initializing issuer: secrets "rootca-libvirt" not found (x2) |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rootca-libvirt-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-libvirt-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
rootca-libvirt |
Generated |
Stored new private key in temporary Secret resource "rootca-libvirt-wklw8" | |
openstack |
cert-manager-certificates-request-manager |
rootca-libvirt |
Requested |
Created new CertificateRequest resource "rootca-libvirt-1" | |
openstack |
cert-manager-certificates-issuing |
rootca-libvirt |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-trigger |
rootca-ovn |
Issuing |
Issuing certificate as Secret does not exist | |
| (x2) | openstack |
cert-manager-issuers |
rootca-ovn |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-ovn" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-ovn |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-ovn" not found |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rootca-ovn-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
rootca-ovn |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x3) | openstack |
cert-manager-issuers |
rootca-internal |
KeyPairVerified |
Signing CA verified |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
rootca-ovn |
Requested |
Created new CertificateRequest resource "rootca-ovn-1" | |
openstack |
cert-manager-certificates-key-manager |
rootca-ovn |
Generated |
Stored new private key in temporary Secret resource "rootca-ovn-7lqts" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-ovn-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
| (x3) | openstack |
cert-manager-issuers |
rootca-public |
KeyPairVerified |
Signing CA verified |
openstack |
cert-manager-certificates-trigger |
rabbitmq-cell1-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
replicaset-controller |
dnsmasq-dns-685c76cf85 |
SuccessfulCreate |
Created pod: dnsmasq-dns-685c76cf85-cdfrk | |
openstack |
cert-manager-certificates-request-manager |
rabbitmq-cell1-svc |
Requested |
Created new CertificateRequest resource "rabbitmq-cell1-svc-1" | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
replicaset-controller |
dnsmasq-dns-8476fd89bc |
SuccessfulCreate |
Created pod: dnsmasq-dns-8476fd89bc-6bm4q | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
rabbitmq-cell1-svc |
Generated |
Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-blzf6" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
rabbitmq-cell1-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-cell1-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
metallb-controller |
dnsmasq-dns |
IPAllocated |
Assigned IP ["192.168.122.80"] | |
openstack |
cert-manager-certificaterequests-approver |
rabbitmq-cell1-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
cert-manager-certificates-request-manager |
rabbitmq-svc |
Requested |
Created new CertificateRequest resource "rabbitmq-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-8476fd89bc to 1 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-685c76cf85 to 1 | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
cert-manager-certificaterequests-issuer-vault |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rabbitmq-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
rabbitmq-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
rabbitmq-svc |
Generated |
Stored new private key in temporary Secret resource "rabbitmq-svc-r6m7k" | |
| (x2) | openstack |
metallb-controller |
rabbitmq-cell1 |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
multus |
dnsmasq-dns-8476fd89bc-6bm4q |
AddedInterface |
Add eth0 [10.128.0.173/23] from ovn-kubernetes | |
| (x2) | openstack |
metallb-controller |
rabbitmq |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
rabbitmq |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
metallb-controller |
rabbitmq |
IPAllocated |
Assigned IP ["172.17.0.85"] | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq of Type *v1.Service | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-nodes of Type *v1.Service | |
openstack |
cert-manager-certificates-issuing |
rabbitmq-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-default-user of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-plugins-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server of Type *v1.ServiceAccount | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-peer-discovery of Type *v1.Role | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server of Type *v1.RoleBinding | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-cell1-server-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
replicaset-controller |
dnsmasq-dns-76849d6659 |
SuccessfulCreate |
Created pod: dnsmasq-dns-76849d6659-8tphm | |
| (x3) | openstack |
cert-manager-issuers |
rootca-libvirt |
KeyPairVerified |
Signing CA verified |
| (x2) | openstack |
persistentvolume-controller |
persistence-rabbitmq-cell1-server-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
persistence-rabbitmq-cell1-server-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0" | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-76849d6659 to 1 from 0 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-8476fd89bc to 0 from 1 | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
(combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server of Type *v1.RoleBinding | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-peer-discovery of Type *v1.Role | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server of Type *v1.ServiceAccount | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-default-user of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1 of Type *v1.Service | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-6ff8fd9d5c to 1 from 0 | |
| (x2) | openstack |
metallb-controller |
rabbitmq-cell1 |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-685c76cf85 to 0 from 1 | |
openstack |
metallb-controller |
rabbitmq-cell1 |
IPAllocated |
Assigned IP ["172.17.0.86"] | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-nodes of Type *v1.Service | |
openstack |
kubelet |
dnsmasq-dns-8476fd89bc-6bm4q |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" | |
openstack |
replicaset-controller |
dnsmasq-dns-8476fd89bc |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-8476fd89bc-6bm4q | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-erlang-cookie of Type *v1.Secret | |
openstack |
replicaset-controller |
dnsmasq-dns-685c76cf85 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-685c76cf85-cdfrk | |
openstack |
cert-manager-certificates-trigger |
galera-openstack-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-685c76cf85-cdfrk |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" | |
openstack |
multus |
dnsmasq-dns-685c76cf85-cdfrk |
AddedInterface |
Add eth0 [10.128.0.172/23] from ovn-kubernetes | |
openstack |
statefulset-controller |
rabbitmq-cell1-server |
SuccessfulCreate |
create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success | |
openstack |
replicaset-controller |
dnsmasq-dns-6ff8fd9d5c |
SuccessfulCreate |
Created pod: dnsmasq-dns-6ff8fd9d5c-qk9z4 | |
openstack |
statefulset-controller |
rabbitmq-cell1-server |
SuccessfulCreate |
create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
galera-openstack-svc |
Generated |
Stored new private key in temporary Secret resource "galera-openstack-svc-4nhdh" | |
openstack |
cert-manager-certificates-request-manager |
galera-openstack-svc |
Requested |
Created new CertificateRequest resource "galera-openstack-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
galera-openstack-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" | |
openstack |
multus |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
AddedInterface |
Add eth0 [10.128.0.175/23] from ovn-kubernetes | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
(combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet | |
openstack |
cert-manager-certificaterequests-issuer-vault |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
dnsmasq-dns-76849d6659-8tphm |
AddedInterface |
Add eth0 [10.128.0.174/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-server-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
statefulset-controller |
rabbitmq-server |
SuccessfulCreate |
create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success | |
openstack |
statefulset-controller |
rabbitmq-server |
SuccessfulCreate |
create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful | |
openstack |
cert-manager-certificates-issuing |
galera-openstack-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-acme |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
statefulset-controller |
openstack-galera |
SuccessfulCreate |
create Pod openstack-galera-0 in StatefulSet openstack-galera successful | |
openstack |
cert-manager-certificaterequests-issuer-acme |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x3) | openstack |
cert-manager-issuers |
rootca-ovn |
KeyPairVerified |
Signing CA verified |
openstack |
persistentvolume-controller |
mysql-db-openstack-galera-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
cert-manager-certificates-request-manager |
galera-openstack-cell1-svc |
Requested |
Created new CertificateRequest resource "galera-openstack-cell1-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
galera-openstack-cell1-svc |
Generated |
Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-hm47v" | |
openstack |
cert-manager-certificates-trigger |
galera-openstack-cell1-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
statefulset-controller |
openstack-galera |
SuccessfulCreate |
create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success | |
openstack |
cert-manager-certificaterequests-approver |
galera-openstack-cell1-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
persistence-rabbitmq-server-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0" | |
openstack |
persistentvolume-controller |
mysql-db-openstack-cell1-galera-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
statefulset-controller |
openstack-cell1-galera |
SuccessfulCreate |
create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success | |
openstack |
cert-manager-certificates-trigger |
memcached-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-issuing |
galera-openstack-cell1-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
statefulset-controller |
openstack-cell1-galera |
SuccessfulCreate |
create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
persistence-rabbitmq-cell1-server-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-958cfe64-d1d3-4ec7-a3d8-81cbd46a10b2 | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-cell1-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-acme |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
memcached-svc |
Requested |
Created new CertificateRequest resource "memcached-svc-1" | |
openstack |
cert-manager-certificaterequests-approver |
memcached-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
memcached-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
memcached-svc |
Generated |
Stored new private key in temporary Secret resource "memcached-svc-nxz5q" | |
| (x3) | openstack |
persistentvolume-controller |
persistence-rabbitmq-server-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
cert-manager-certificates-issuing |
memcached-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
persistence-rabbitmq-server-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-3fd58476-e6c9-4799-b98f-2b7147237a93 | |
| (x2) | openstack |
persistentvolume-controller |
mysql-db-openstack-cell1-galera-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
cert-manager-certificaterequests-issuer-vault |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x3) | openstack |
persistentvolume-controller |
mysql-db-openstack-galera-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
mysql-db-openstack-galera-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovn-metrics-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
ovn-metrics |
Generated |
Stored new private key in temporary Secret resource "ovn-metrics-zr8g8" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovn-metrics-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-trigger |
ovn-metrics |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
statefulset-controller |
memcached |
SuccessfulCreate |
create Pod memcached-0 in StatefulSet memcached successful | |
openstack |
cert-manager-certificates-request-manager |
ovn-metrics |
Requested |
Created new CertificateRequest resource "ovn-metrics-1" | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
mysql-db-openstack-cell1-galera-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0" | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
mysql-db-openstack-galera-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-634204c0-c830-46d5-93f3-06cce770b921 | |
openstack |
cert-manager-certificates-trigger |
ovncontroller-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
mysql-db-openstack-cell1-galera-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-1e5a2d74-11f9-48e0-80c2-b9f406c2965e | |
openstack |
cert-manager-certificates-issuing |
ovn-metrics |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-trigger |
ovndbcluster-nb-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-trigger |
ovnnorthd-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
memcached-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:0d759b31e4da88b3fa1b823ab634d982fd713e81ed648626de1d8ec40ae7cad4" | |
openstack |
multus |
memcached-0 |
AddedInterface |
Add eth0 [10.128.0.178/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-nb-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
ovndbcluster-nb-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-bphpn" | |
openstack |
cert-manager-certificaterequests-approver |
ovndbcluster-nb-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovndbcluster-nb-ovndbs |
Requested |
Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1" | |
openstack |
cert-manager-certificates-trigger |
ovndbcluster-sb-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-trigger |
neutron-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
ovncontroller-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovncontroller-ovndbs-vx5qt" | |
openstack |
multus |
rabbitmq-cell1-server-0 |
AddedInterface |
Add eth0 [10.128.0.176/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
ovndbcluster-nb-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-key-manager |
ovndbcluster-sb-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-zmgzk" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
ovnnorthd-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-99n7x" | |
openstack |
cert-manager-certificates-key-manager |
neutron-ovndbs |
Generated |
Stored new private key in temporary Secret resource "neutron-ovndbs-m6f2q" | |
openstack |
cert-manager-certificates-request-manager |
ovnnorthd-ovndbs |
Requested |
Created new CertificateRequest resource "ovnnorthd-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovncontroller-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovndbcluster-sb-ovndbs |
Requested |
Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-approver |
ovncontroller-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovncontroller-ovndbs |
Requested |
Created new CertificateRequest resource "ovncontroller-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
persistentvolume-controller |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0" | |
| (x2) | openstack |
persistentvolume-controller |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
statefulset-controller |
ovsdbserver-nb |
SuccessfulCreate |
create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful | |
openstack |
statefulset-controller |
ovsdbserver-nb |
SuccessfulCreate |
create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success | |
openstack |
cert-manager-certificaterequests-issuer-acme |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovnnorthd-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovnnorthd-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
neutron-ovndbs |
Requested |
Created new CertificateRequest resource "neutron-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ovndbcluster-sb-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-sb-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
neutron-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-efa4a51f-71e2-4b74-be2f-ade92b38c81c | |
openstack |
cert-manager-certificates-issuing |
ovncontroller-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
daemonset-controller |
ovn-controller |
SuccessfulCreate |
Created pod: ovn-controller-m68fw | |
openstack |
cert-manager-certificates-issuing |
ovndbcluster-sb-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-issuing |
neutron-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
daemonset-controller |
ovn-controller-ovs |
SuccessfulCreate |
Created pod: ovn-controller-ovs-sl66q | |
openstack |
persistentvolume-controller |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
cert-manager-certificates-issuing |
ovnnorthd-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
statefulset-controller |
ovsdbserver-sb |
SuccessfulCreate |
create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success | |
openstack |
statefulset-controller |
ovsdbserver-sb |
SuccessfulCreate |
create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:2087a09e7ea9f1dbadd433366bb46cc93dd5460ac9606b65f430460f4c2ee18d" | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0" | |
openstack |
persistentvolume-controller |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-0a5fe18d-7bfe-4749-8084-375f18d4d707 | |
openstack |
multus |
openstack-cell1-galera-0 |
AddedInterface |
Add eth0 [10.128.0.180/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-685c76cf85-cdfrk |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-8476fd89bc-6bm4q |
Created |
Created container: init | |
openstack |
kubelet |
memcached-0 |
Started |
Started container memcached | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" | |
openstack |
multus |
openstack-galera-0 |
AddedInterface |
Add eth0 [10.128.0.179/23] from ovn-kubernetes | |
openstack |
kubelet |
openstack-galera-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" | |
openstack |
multus |
ovn-controller-m68fw |
AddedInterface |
Add eth0 [10.128.0.182/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-ovs-sl66q |
AddedInterface |
Add eth0 [10.128.0.183/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-ovs-sl66q |
AddedInterface |
Add datacentre [] from openstack/datacentre | |
openstack |
kubelet |
memcached-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:0d759b31e4da88b3fa1b823ab634d982fd713e81ed648626de1d8ec40ae7cad4" in 18.754s (18.754s including waiting). Image size: 277692612 bytes. | |
openstack |
kubelet |
dnsmasq-dns-685c76cf85-cdfrk |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" in 24.666s (24.666s including waiting). Image size: 679076174 bytes. | |
openstack |
kubelet |
dnsmasq-dns-685c76cf85-cdfrk |
Created |
Created container: init | |
openstack |
kubelet |
rabbitmq-server-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:2087a09e7ea9f1dbadd433366bb46cc93dd5460ac9606b65f430460f4c2ee18d" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-8476fd89bc-6bm4q |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" in 23.161s (23.161s including waiting). Image size: 679076174 bytes. | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" in 24.084s (24.084s including waiting). Image size: 679076174 bytes. | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Started |
Started container init | |
openstack |
kubelet |
memcached-0 |
Created |
Created container: memcached | |
openstack |
kubelet |
dnsmasq-dns-8476fd89bc-6bm4q |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" in 24.537s (24.537s including waiting). Image size: 679076174 bytes. | |
openstack |
multus |
rabbitmq-server-0 |
AddedInterface |
Add eth0 [10.128.0.177/23] from ovn-kubernetes | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:2087a09e7ea9f1dbadd433366bb46cc93dd5460ac9606b65f430460f4c2ee18d" in 12.458s (12.458s including waiting). Image size: 304732739 bytes. | |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add eth0 [10.128.0.184/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-ovs-sl66q |
AddedInterface |
Add tenant [172.19.0.30/24] from openstack/tenant | |
openstack |
multus |
ovsdbserver-nb-0 |
AddedInterface |
Add eth0 [10.128.0.181/23] from ovn-kubernetes | |
openstack |
multus |
ovsdbserver-nb-0 |
AddedInterface |
Add internalapi [172.17.0.30/24] from openstack/internalapi | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:943db2d546cbc78f663edcd102c478b71d755a66f99d24fea1b4e628c4125104" | |
openstack |
kubelet |
ovn-controller-m68fw |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:79213c923a25c7aa65998a66c3c2c2fbd8973f837cfb94f867e567cd71614af0" | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Created |
Created container: setup-container | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bba31d7d170c92451c1d62346da1057e9c0e941a074a32cc54219cb79a4ea24a" | |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add internalapi [172.17.0.31/24] from openstack/internalapi | |
openstack |
kubelet |
rabbitmq-server-0 |
Created |
Created container: setup-container | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-6ff8fd9d5c-qk9z4 |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:ac71d8f4475d08f0a40a993cf5f083aead99232c2d5d8cd9514d63a345d0c128" | |
openstack |
kubelet |
rabbitmq-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
ovn-controller-m68fw |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:79213c923a25c7aa65998a66c3c2c2fbd8973f837cfb94f867e567cd71614af0" in 7.632s (7.632s including waiting). Image size: 346792900 bytes. | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:943db2d546cbc78f663edcd102c478b71d755a66f99d24fea1b4e628c4125104" in 7.355s (7.355s including waiting). Image size: 324510219 bytes. | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" in 8.636s (8.636s including waiting). Image size: 429677374 bytes. | |
openstack |
kubelet |
openstack-galera-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" in 7.972s (7.972s including waiting). Image size: 429677374 bytes. | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:ac71d8f4475d08f0a40a993cf5f083aead99232c2d5d8cd9514d63a345d0c128" in 6.441s (6.441s including waiting). Image size: 346963744 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bba31d7d170c92451c1d62346da1057e9c0e941a074a32cc54219cb79a4ea24a" in 5.784s (5.784s including waiting). Image size: 346963744 bytes. | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: ovsdbserver-nb | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:943db2d546cbc78f663edcd102c478b71d755a66f99d24fea1b4e628c4125104" already present on machine | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Started |
Started container ovsdb-server-init | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Created |
Created container: ovsdb-server-init | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: ovsdbserver-sb | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container ovsdbserver-nb | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
kubelet |
ovn-controller-m68fw |
Started |
Started container ovn-controller | |
openstack |
kubelet |
ovn-controller-m68fw |
Created |
Created container: ovn-controller | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container ovsdbserver-sb | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Created |
Created container: ovs-vswitchd | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Created |
Created container: ovsdb-server | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Started |
Started container ovs-vswitchd | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:943db2d546cbc78f663edcd102c478b71d755a66f99d24fea1b4e628c4125104" already present on machine | |
openstack |
kubelet |
ovn-controller-ovs-sl66q |
Started |
Started container ovsdb-server | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-76849d6659 to 0 from 1 | |
openstack |
kubelet |
dnsmasq-dns-76849d6659-8tphm |
Killing |
Stopping container dnsmasq-dns | |
openstack |
replicaset-controller |
dnsmasq-dns-76849d6659 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-76849d6659-8tphm | |
openstack |
metallb-controller |
swift-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-7bb8ffc699 to 1 | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
replicaset-controller |
dnsmasq-dns-7bb8ffc699 |
SuccessfulCreate |
Created pod: dnsmasq-dns-7bb8ffc699-2qz2r | |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
kubelet |
openstack-galera-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 6.908s (6.908s including waiting). Image size: 165206333 bytes. | |
openstack |
cert-manager-certificaterequests-approver |
swift-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
swift-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container galera | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: galera | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
cert-manager-certificates-key-manager |
swift-internal-svc |
Generated |
Stored new private key in temporary Secret resource "swift-internal-svc-f6qtf" | |
openstack |
cert-manager-certificates-request-manager |
swift-internal-svc |
Requested |
Created new CertificateRequest resource "swift-internal-svc-1" | |
openstack |
cert-manager-certificates-issuing |
swift-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-vault |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 7.032s (7.032s including waiting). Image size: 165206333 bytes. | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
swift-swift-storage-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0" | |
openstack |
persistentvolume-controller |
swift-swift-storage-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
persistentvolume-controller |
swift-swift-storage-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
statefulset-controller |
swift-storage |
SuccessfulCreate |
create Pod swift-storage-0 in StatefulSet swift-storage successful | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container galera | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: galera | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
statefulset-controller |
swift-storage |
SuccessfulCreate |
create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
swift-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Created |
Created container: init | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
swift-swift-storage-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-9d5fb6d2-3b43-49ba-ba81-25e6cdfebfd2 | |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
multus |
dnsmasq-dns-7bb8ffc699-2qz2r |
AddedInterface |
Add eth0 [10.128.0.185/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-vault |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
swift-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
swift-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
swift-public-svc |
Generated |
Stored new private key in temporary Secret resource "swift-public-svc-cwm88" | |
openstack |
cert-manager-certificates-request-manager |
swift-public-svc |
Requested |
Created new CertificateRequest resource "swift-public-svc-1" | |
openstack |
cert-manager-certificates-issuing |
swift-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Created |
Created container: dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
job-controller |
swift-ring-rebalance |
SuccessfulCreate |
Created pod: swift-ring-rebalance-l8hw9 | |
openstack |
daemonset-controller |
ovn-controller-metrics |
SuccessfulCreate |
Created pod: ovn-controller-metrics-7dlz8 | |
openstack |
cert-manager-certificates-issuing |
swift-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
swift-public-route |
Requested |
Created new CertificateRequest resource "swift-public-route-1" | |
openstack |
cert-manager-certificates-key-manager |
swift-public-route |
Generated |
Stored new private key in temporary Secret resource "swift-public-route-d7wpk" | |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Started |
Started container dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-approver |
swift-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-acme |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
replicaset-controller |
dnsmasq-dns-6796764987 |
SuccessfulCreate |
Created pod: dnsmasq-dns-6796764987-gtg4x | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-7bb8ffc699 to 0 from 1 | |
openstack |
multus |
swift-ring-rebalance-l8hw9 |
AddedInterface |
Add eth0 [10.128.0.187/23] from ovn-kubernetes | |
openstack |
replicaset-controller |
dnsmasq-dns-6796764987 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6796764987-gtg4x | |
openstack |
replicaset-controller |
dnsmasq-dns-7bb8ffc699 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-7bb8ffc699-2qz2r | |
openstack |
replicaset-controller |
dnsmasq-dns-5bf8b865dc |
SuccessfulCreate |
Created pod: dnsmasq-dns-5bf8b865dc-vtxcj | |
openstack |
statefulset-controller |
ovn-northd |
SuccessfulCreate |
create Pod ovn-northd-0 in StatefulSet ovn-northd successful | |
openstack |
kubelet |
dnsmasq-dns-5bf8b865dc-vtxcj |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
kubelet |
ovn-controller-metrics-7dlz8 |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine | |
openstack |
kubelet |
swift-ring-rebalance-l8hw9 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:6c3eb966650a7a98feb4ddb31e1bdba1095b0c62e349196aca6a423681d7e5fb" | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1 of Type *v1.Service |
openstack |
kubelet |
dnsmasq-dns-7bb8ffc699-2qz2r |
Killing |
Stopping container dnsmasq-dns | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1-server of Type *v1.StatefulSet |
openstack |
multus |
ovn-controller-metrics-7dlz8 |
AddedInterface |
Add eth0 [10.128.0.188/23] from ovn-kubernetes | |
openstack |
multus |
dnsmasq-dns-5bf8b865dc-vtxcj |
AddedInterface |
Add eth0 [10.128.0.190/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-5bf8b865dc-vtxcj |
Started |
Started container init | |
openstack |
kubelet |
ovn-northd-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3d4aa78bc0932fd39a377beb5a649e47832c0de33a62c413776de2f9de31763e" | |
openstack |
multus |
ovn-northd-0 |
AddedInterface |
Add eth0 [10.128.0.191/23] from ovn-kubernetes | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulUpdate |
updated resource rabbitmq-server of Type *v1.StatefulSet |
openstack |
kubelet |
dnsmasq-dns-5bf8b865dc-vtxcj |
Created |
Created container: init | |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq of Type *v1.Service |
| | openstack | kubelet | ovn-controller-metrics-7dlz8 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | ovn-controller-metrics-7dlz8 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-5bf8b865dc-vtxcj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | swift-ring-rebalance-l8hw9 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:6c3eb966650a7a98feb4ddb31e1bdba1095b0c62e349196aca6a423681d7e5fb" in 4.811s (4.812s including waiting). Image size: 500200203 bytes. |
| | openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3d4aa78bc0932fd39a377beb5a649e47832c0de33a62c413776de2f9de31763e" in 3.867s (3.867s including waiting). Image size: 346960837 bytes. |
| | openstack | job-controller | glance-fc3e-account-create-update | SuccessfulCreate | Created pod: glance-fc3e-account-create-update-btzjb |
| | openstack | kubelet | dnsmasq-dns-5bf8b865dc-vtxcj | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-5bf8b865dc-vtxcj | Started | Started container dnsmasq-dns |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd |
| | openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | swift-ring-rebalance-l8hw9 | Created | Created container: swift-ring-rebalance |
| | openstack | kubelet | swift-ring-rebalance-l8hw9 | Started | Started container swift-ring-rebalance |
| (x5) | openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found |
| | openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-nj4vf |
| | openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine |
| | openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd |
| | openstack | kubelet | glance-db-create-nj4vf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | multus | glance-fc3e-account-create-update-btzjb | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openstack | multus | glance-db-create-nj4vf | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes |
| | openstack | kubelet | glance-fc3e-account-create-update-btzjb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | glance-fc3e-account-create-update-btzjb | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | glance-fc3e-account-create-update-btzjb | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | glance-db-create-nj4vf | Started | Started container mariadb-database-create |
| | openstack | kubelet | glance-db-create-nj4vf | Created | Created container: mariadb-database-create |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-k88tf |
| | openstack | metal3-baremetal-controller | bmh0 | Registered | Registered new host |
| | openstack | kubelet | root-account-create-update-k88tf | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | root-account-create-update-k88tf | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | root-account-create-update-k88tf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | metal3-baremetal-controller | bmh1 | Registered | Registered new host |
| | openstack | multus | root-account-create-update-k88tf | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes |
| | openstack | job-controller | glance-fc3e-account-create-update | Completed | Job completed |
| | openstack | job-controller | glance-db-create | Completed | Job completed |
| | openstack | kubelet | dnsmasq-dns-6ff8fd9d5c-qk9z4 | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-6ff8fd9d5c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6ff8fd9d5c-qk9z4 |
| | openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-zxw2c |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | job-controller | placement-f95e-account-create-update | SuccessfulCreate | Created pod: placement-f95e-account-create-update-gph65 |
| | openstack | job-controller | keystone-16fb-account-create-update | SuccessfulCreate | Created pod: keystone-16fb-account-create-update-8cp5c |
| | openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-5bqq7 |
| | openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-wpbkz |
| | openstack | multus | glance-db-sync-zxw2c | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes |
| | openstack | multus | glance-db-sync-zxw2c | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | multus | placement-db-create-5bqq7 | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes |
| | openstack | multus | keystone-16fb-account-create-update-8cp5c | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes |
| | openstack | multus | keystone-db-create-wpbkz | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes |
| | openstack | multus | placement-f95e-account-create-update-gph65 | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-db-create-wpbkz | Created | Created container: mariadb-database-create |
| | openstack | kubelet | keystone-16fb-account-create-update-8cp5c | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | placement-db-create-5bqq7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | glance-db-sync-zxw2c | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" |
| | openstack | kubelet | keystone-db-create-wpbkz | Started | Started container mariadb-database-create |
| | openstack | kubelet | placement-f95e-account-create-update-gph65 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | keystone-db-create-wpbkz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | keystone-16fb-account-create-update-8cp5c | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | keystone-16fb-account-create-update-8cp5c | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq |
| | openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq |
| | openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq |
| | openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-create-5bqq7 | Created | Created container: mariadb-database-create |
| | openstack | kubelet | placement-db-create-5bqq7 | Started | Started container mariadb-database-create |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:f919023e9754d0d94b3fa3e7f571e6d22330ad3cdbb17b20d6143d2581b49ef1" |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:2087a09e7ea9f1dbadd433366bb46cc93dd5460ac9606b65f430460f4c2ee18d" already present on machine |
| | openstack | kubelet | placement-f95e-account-create-update-gph65 | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | placement-f95e-account-create-update-gph65 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:2087a09e7ea9f1dbadd433366bb46cc93dd5460ac9606b65f430460f4c2ee18d" already present on machine |
| | openstack | job-controller | swift-ring-rebalance | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-replicator |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-auditor |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:f919023e9754d0d94b3fa3e7f571e6d22330ad3cdbb17b20d6143d2581b49ef1" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:f919023e9754d0d94b3fa3e7f571e6d22330ad3cdbb17b20d6143d2581b49ef1" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:f919023e9754d0d94b3fa3e7f571e6d22330ad3cdbb17b20d6143d2581b49ef1" already present on machine |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-auditor |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:f919023e9754d0d94b3fa3e7f571e6d22330ad3cdbb17b20d6143d2581b49ef1" in 1.882s (1.882s including waiting). Image size: 445167617 bytes. |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-replicator |
| | openstack | job-controller | placement-f95e-account-create-update | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:37da219a7d5254e5fa6cac571f99d8ca7c600d3243b68ffb282a6c70ff8b3ff2" |
| | openstack | job-controller | keystone-db-create | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-reaper |
| | openstack | job-controller | placement-db-create | Completed | Job completed |
| | openstack | job-controller | keystone-16fb-account-create-update | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-server |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-replicator |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-dh5fs |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:37da219a7d5254e5fa6cac571f99d8ca7c600d3243b68ffb282a6c70ff8b3ff2" in 1.082s (1.082s including waiting). Image size: 445183491 bytes. |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-server |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:37da219a7d5254e5fa6cac571f99d8ca7c600d3243b68ffb282a6c70ff8b3ff2" already present on machine |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator |
| | openstack | kubelet | root-account-create-update-dh5fs | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | root-account-create-update-dh5fs | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | root-account-create-update-dh5fs | Started | Started container mariadb-account-create-update |
| | openstack | multus | root-account-create-update-dh5fs | AddedInterface | Add eth0 [10.128.0.200/23] from ovn-kubernetes |
| | openstack | job-controller | ovn-controller-m68fw-config | SuccessfulCreate | Created pod: ovn-controller-m68fw-config-f49xm |
| | openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered |
| | openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered |
| (x2) | openstack | kubelet | ovn-controller-m68fw | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status |
| | openstack | metal3-baremetal-controller | bmh1 | ProfileSet | Hardware profile set: unknown |
| | openstack | metal3-baremetal-controller | bmh0 | ProfileSet | Hardware profile set: unknown |
| | openstack | metal3-baremetal-controller | bmh1 | PowerOff | Host soft powered off |
| | openstack | metal3-baremetal-controller | bmh0 | PowerOff | Host soft powered off |
| | openstack | kubelet | glance-db-sync-zxw2c | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" in 15.021s (15.021s including waiting). Image size: 983055212 bytes. |
| | openstack | metal3-baremetal-controller | bmh1 | BMCAccessValidated | Verified access to BMC |
| | openstack | metal3-baremetal-controller | bmh0 | BMCAccessValidated | Verified access to BMC |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | kubelet | ovn-controller-m68fw-config-f49xm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:79213c923a25c7aa65998a66c3c2c2fbd8973f837cfb94f867e567cd71614af0" already present on machine |
| | openstack | kubelet | ovn-controller-m68fw-config-f49xm | Created | Created container: ovn-config |
| | openstack | kubelet | ovn-controller-m68fw-config-f49xm | Started | Started container ovn-config |
| | openstack | multus | ovn-controller-m68fw-config-f49xm | AddedInterface | Add eth0 [10.128.0.201/23] from ovn-kubernetes |
| | openstack | kubelet | glance-db-sync-zxw2c | Started | Started container glance-db-sync |
| | openstack | kubelet | glance-db-sync-zxw2c | Created | Created container: glance-db-sync |
| | openstack | replicaset-controller | dnsmasq-dns-9748bd58f | SuccessfulCreate | Created pod: dnsmasq-dns-9748bd58f-s2fbq |
| | openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | multus | dnsmasq-dns-9748bd58f-s2fbq | AddedInterface | Add eth0 [10.128.0.202/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-9748bd58f-s2fbq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | dnsmasq-dns-9748bd58f-s2fbq | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-9748bd58f-s2fbq | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-9748bd58f-s2fbq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | job-controller | ovn-controller-m68fw-config | SuccessfulCreate | Created pod: ovn-controller-m68fw-config-tzm6j |
| | openstack | kubelet | dnsmasq-dns-9748bd58f-s2fbq | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-9748bd58f-s2fbq | Created | Created container: dnsmasq-dns |
| | openstack | job-controller | cinder-3735-account-create-update | SuccessfulCreate | Created pod: cinder-3735-account-create-update-59xbx |
| | openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-8nppp |
| | openstack | job-controller | ovn-controller-m68fw-config | Completed | Job completed |
| | openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-vk8gz |
| | openstack | multus | ovn-controller-m68fw-config-tzm6j | AddedInterface | Add eth0 [10.128.0.204/23] from ovn-kubernetes |
| | openstack | multus | cinder-3735-account-create-update-59xbx | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes |
| | openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-wjhhn |
| | openstack | multus | cinder-db-create-8nppp | AddedInterface | Add eth0 [10.128.0.203/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-db-create-8nppp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | job-controller | neutron-886c-account-create-update | SuccessfulCreate | Created pod: neutron-886c-account-create-update-24ntn |
| | openstack | multus | neutron-db-create-wjhhn | AddedInterface | Add eth0 [10.128.0.206/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-3735-account-create-update-59xbx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | ovn-controller-m68fw-config-tzm6j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:79213c923a25c7aa65998a66c3c2c2fbd8973f837cfb94f867e567cd71614af0" already present on machine |
| | openstack | kubelet | ovn-controller-m68fw-config-tzm6j | Created | Created container: ovn-config |
| | openstack | kubelet | ovn-controller-m68fw-config-tzm6j | Started | Started container ovn-config |
| | openstack | kubelet | cinder-3735-account-create-update-59xbx | Started | Started container mariadb-account-create-update |
| | openstack | multus | keystone-db-sync-vk8gz | AddedInterface | Add eth0 [10.128.0.207/23] from ovn-kubernetes |
| | openstack | multus | neutron-886c-account-create-update-24ntn | AddedInterface | Add eth0 [10.128.0.208/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-886c-account-create-update-24ntn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | keystone-db-sync-vk8gz | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" |
| | openstack | kubelet | neutron-db-create-wjhhn | Started | Started container mariadb-database-create |
| | openstack | kubelet | neutron-db-create-wjhhn | Created | Created container: mariadb-database-create |
| | openstack | kubelet | neutron-db-create-wjhhn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine |
| | openstack | kubelet | cinder-3735-account-create-update-59xbx | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | neutron-886c-account-create-update-24ntn | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | cinder-db-create-8nppp | Created | Created container: mariadb-database-create |
| | openstack | kubelet | cinder-db-create-8nppp | Started | Started container mariadb-database-create |
| | openstack | kubelet | neutron-886c-account-create-update-24ntn | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | keystone-db-sync-vk8gz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" in 5.089s (5.089s including waiting). Image size: 520208900 bytes. |
| | openstack | kubelet | keystone-db-sync-vk8gz | Created | Created container: keystone-db-sync |
| | openstack | kubelet | keystone-db-sync-vk8gz | Started | Started container keystone-db-sync |
| | openstack | job-controller | neutron-886c-account-create-update | Completed | Job completed |
| | openstack | job-controller | neutron-db-create | Completed | Job completed |
| | openstack | replicaset-controller | dnsmasq-dns-5bf8b865dc | SuccessfulDelete | Deleted pod: dnsmasq-dns-5bf8b865dc-vtxcj |
| | openstack | job-controller | cinder-3735-account-create-update | Completed | Job completed |
| | openstack | job-controller | cinder-db-create | Completed | Job completed |
| | openstack | kubelet | dnsmasq-dns-5bf8b865dc-vtxcj | Killing | Stopping container dnsmasq-dns |
| | openstack | job-controller | ovn-controller-m68fw-config | Completed | Job completed |
| | openstack | replicaset-controller | dnsmasq-dns-86659cf465 | SuccessfulCreate | Created pod: dnsmasq-dns-86659cf465-r6c25 |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | replicaset-controller | dnsmasq-dns-86659cf465 | SuccessfulDelete | Deleted pod: dnsmasq-dns-86659cf465-r6c25 |
| | openstack | job-controller | glance-db-sync | Completed | Job completed |
| | openstack | multus | dnsmasq-dns-86659cf465-r6c25 | AddedInterface | Add eth0 [10.128.0.209/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-59697cf549 | SuccessfulCreate | Created pod: dnsmasq-dns-59697cf549-dzw8p |
| | openstack | metallb-controller | glance-default-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | kubelet | dnsmasq-dns-86659cf465-r6c25 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | multus | dnsmasq-dns-59697cf549-dzw8p | AddedInterface | Add eth0 [10.128.0.210/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-59697cf549 | SuccessfulDelete | Deleted pod: dnsmasq-dns-59697cf549-dzw8p |
| | openstack | kubelet | dnsmasq-dns-86659cf465-r6c25 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-86659cf465-r6c25 | Created | Created container: init |
| | openstack | replicaset-controller | dnsmasq-dns-85f88f897 | SuccessfulCreate | Created pod: dnsmasq-dns-85f88f897-5c5kd |
| | openstack | cert-manager-certificates-trigger | glance-default-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | replicaset-controller | edpm-a-provisionserver-openstackprovisionserver-7544578cbc | SuccessfulCreate | Created pod: edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 |
| | openstack | kubelet | dnsmasq-dns-59697cf549-dzw8p | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | deployment-controller | edpm-a-provisionserver-openstackprovisionserver | ScalingReplicaSet | Scaled up replica set edpm-a-provisionserver-openstackprovisionserver-7544578cbc to 1 |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-85f88f897-5c5kd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | cert-manager-certificates-issuing | glance-default-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | glance-default-public-svc | Requested | Created new CertificateRequest resource "glance-default-public-svc-1" |
| | openstack | cert-manager-certificates-trigger | glance-default-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | glance-default-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | glance-default-public-svc | Generated | Stored new private key in temporary Secret resource "glance-default-public-svc-5mgfx" |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-59697cf549-dzw8p | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-59697cf549-dzw8p | Created | Created container: init |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 | Pulling | Pulling image "quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:894ce79b38510973ca610423cc34a7383b7761b6ceb47d18637daffaa93336f7" |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | glance-default-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | glance-default-internal-svc | Generated | Stored new private key in temporary Secret resource "glance-default-internal-svc-n6m7g" |
| | openstack | kubelet | dnsmasq-dns-85f88f897-5c5kd | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-85f88f897-5c5kd | Created | Created container: init |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | glance-default-internal-svc | Requested | Created new CertificateRequest resource "glance-default-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | glance-default-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | multus | dnsmasq-dns-85f88f897-5c5kd | AddedInterface | Add eth0 [10.128.0.211/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-trigger | glance-default-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | glance-default-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | glance-default-public-route | Generated | Stored new private key in temporary Secret resource "glance-default-public-route-frkvj" |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | glance-default-public-route | Requested | Created new CertificateRequest resource "glance-default-public-route-1" |
| | openstack | kubelet | dnsmasq-dns-85f88f897-5c5kd | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-85f88f897-5c5kd | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-85f88f897-5c5kd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | job-controller | keystone-db-sync | Completed | Job completed |
| | default | endpoint-controller | keystone-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/keystone-internal: endpoints "keystone-internal" already exists |
| | openstack | statefulset-controller | glance-3a5fd-default-external-api | SuccessfulCreate | create Claim glance-glance-3a5fd-default-external-api-0 Pod glance-3a5fd-default-external-api-0 in StatefulSet glance-3a5fd-default-external-api success |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | metallb-controller | keystone-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | persistentvolume-controller | glance-glance-3a5fd-default-external-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | replicaset-controller | dnsmasq-dns-85f88f897 | SuccessfulDelete | Deleted pod: dnsmasq-dns-85f88f897-5c5kd |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-hlqwd |
| | openstack | persistentvolume-controller | glance-glance-3a5fd-default-internal-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | replicaset-controller | dnsmasq-dns-d8f46bbdf | SuccessfulCreate | Created pod: dnsmasq-dns-d8f46bbdf-cnrwt |
| | openstack | topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 | glance-glance-3a5fd-default-external-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-3a5fd-default-external-api-0" |
| | openstack | statefulset-controller | glance-3a5fd-default-internal-api | SuccessfulCreate | create Claim glance-glance-3a5fd-default-internal-api-0 Pod glance-3a5fd-default-internal-api-0 in StatefulSet glance-3a5fd-default-internal-api success |
| | openstack | job-controller | cinder-7ba05-db-sync | SuccessfulCreate | Created pod: cinder-7ba05-db-sync-jdc2m |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | replicaset-controller | dnsmasq-dns-d8f46bbdf | SuccessfulDelete | Deleted pod: dnsmasq-dns-d8f46bbdf-cnrwt |
| (x2) | openstack | persistentvolume-controller | glance-glance-3a5fd-default-external-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | job-controller | neutron-db-sync | SuccessfulCreate | Created pod: neutron-db-sync-hbzpf |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
persistentvolume-controller |
glance-glance-3a5fd-default-internal-api-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
replicaset-controller |
dnsmasq-dns-7cb6bf676c |
SuccessfulCreate |
Created pod: dnsmasq-dns-7cb6bf676c-xlvsw | |
openstack |
job-controller |
placement-db-sync |
SuccessfulCreate |
Created pod: placement-db-sync-2flmr | |
openstack |
cert-manager-certificaterequests-approver |
keystone-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
cert-manager-certificates-request-manager |
keystone-internal-svc |
Requested |
Created new CertificateRequest resource "keystone-internal-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
keystone-internal-svc |
Generated |
Stored new private key in temporary Secret resource "keystone-internal-svc-hst9v" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
keystone-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-85f88f897-5c5kd |
Killing |
Stopping container dnsmasq-dns | |
openstack |
cert-manager-certificates-request-manager |
keystone-public-route |
Requested |
Created new CertificateRequest resource "keystone-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
glance-glance-3a5fd-default-internal-api-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/glance-glance-3a5fd-default-internal-api-0" | |
openstack |
cert-manager-certificates-key-manager |
keystone-public-svc |
Generated |
Stored new private key in temporary Secret resource "keystone-public-svc-d4tvc" | |
openstack |
cert-manager-certificates-request-manager |
keystone-public-svc |
Requested |
Created new CertificateRequest resource "keystone-public-svc-1" | |
openstack |
cert-manager-certificates-trigger |
keystone-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
keystone-public-route |
Generated |
Stored new private key in temporary Secret resource "keystone-public-route-c6q2g" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
keystone-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
keystone-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
keystone-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
glance-glance-3a5fd-default-external-api-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-430873fc-8a8f-4afc-91e0-5a0e7c55256f | |
openstack |
cert-manager-certificaterequests-approver |
keystone-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-acme |
keystone-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
keystone-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
keystone-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
keystone-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-issuing |
keystone-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
keystone-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
keystone-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
topolvm.io_lvms-operator-c6dbd8b78-6p8rh_c3dd097f-d49a-40ad-8f1f-bc5522ac2626 |
glance-glance-3a5fd-default-internal-api-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-efcc7bfe-2396-4399-97dc-5dbf9ab97eba | |
openstack |
cert-manager-certificates-trigger |
placement-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
neutron-db-sync-hbzpf |
Created |
Created container: neutron-db-sync | |
openstack |
cert-manager-certificaterequests-issuer-vault |
placement-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
dnsmasq-dns-7cb6bf676c-xlvsw |
AddedInterface |
Add eth0 [10.128.0.217/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-key-manager |
placement-internal-svc |
Generated |
Stored new private key in temporary Secret resource "placement-internal-svc-9kz98" | |
openstack |
cert-manager-certificaterequests-approver |
placement-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
placement-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
placement-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
placement-internal-svc |
Requested |
Created new CertificateRequest resource "placement-internal-svc-1" | |
openstack |
kubelet |
neutron-db-sync-hbzpf |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
multus |
neutron-db-sync-hbzpf |
AddedInterface |
Add eth0 [10.128.0.215/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-ca |
placement-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
placement-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
keystone-bootstrap-hlqwd |
AddedInterface |
Add eth0 [10.128.0.213/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-acme |
placement-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
placement-db-sync-2flmr |
AddedInterface |
Add eth0 [10.128.0.216/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-vault |
placement-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-d8f46bbdf-cnrwt |
Created |
Created container: init | |
openstack |
kubelet |
placement-db-sync-2flmr |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:b8a5d052890fb9cefa333baf10b607add227ed5d79aa108b576a97b21e89327a" | |
openstack |
kubelet |
keystone-bootstrap-hlqwd |
Created |
Created container: keystone-bootstrap | |
openstack |
kubelet |
keystone-bootstrap-hlqwd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" already present on machine | |
openstack |
multus |
dnsmasq-dns-d8f46bbdf-cnrwt |
AddedInterface |
Add eth0 [10.128.0.212/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-request-manager |
placement-public-svc |
Requested |
Created new CertificateRequest resource "placement-public-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
placement-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
placement-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
keystone-bootstrap-hlqwd |
Started |
Started container keystone-bootstrap | |
openstack |
cert-manager-certificates-key-manager |
placement-public-svc |
Generated |
Stored new private key in temporary Secret resource "placement-public-svc-fs5hb" | |
openstack |
kubelet |
dnsmasq-dns-7cb6bf676c-xlvsw |
Created |
Created container: init | |
openstack |
kubelet |
cinder-7ba05-db-sync-jdc2m |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" | |
openstack |
multus |
cinder-7ba05-db-sync-jdc2m |
AddedInterface |
Add eth0 [10.128.0.214/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-trigger |
placement-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-7cb6bf676c-xlvsw |
Started |
Started container init | |
openstack |
cert-manager-certificaterequests-issuer-ca |
placement-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
neutron-db-sync-hbzpf |
Started |
Started container neutron-db-sync | |
openstack |
cert-manager-certificates-trigger |
placement-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
placement-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-7cb6bf676c-xlvsw |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
placement-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
placement-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
dnsmasq-dns-d8f46bbdf-cnrwt |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
cert-manager-certificaterequests-approver |
placement-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
dnsmasq-dns-d8f46bbdf-cnrwt |
Started |
Started container init | |
openstack |
cert-manager-certificates-issuing |
placement-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
dnsmasq-dns-7cb6bf676c-xlvsw |
Created |
Created container: dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-ca |
placement-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
placement-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
placement-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
placement-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
placement-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
placement-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
placement-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
placement-public-route |
Generated |
Stored new private key in temporary Secret resource "placement-public-route-btwx8" | |
openstack |
cert-manager-certificates-request-manager |
placement-public-route |
Requested |
Created new CertificateRequest resource "placement-public-route-1" | |
openstack |
kubelet |
dnsmasq-dns-7cb6bf676c-xlvsw |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-7cb6bf676c-xlvsw |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
cert-manager-certificates-issuing |
placement-public-route |
Issuing |
The certificate has been successfully issued | |
| (x25) | openstack |
metallb-speaker |
dnsmasq-dns |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
openstack |
multus |
glance-3a5fd-default-external-api-0 |
AddedInterface |
Add eth0 [10.128.0.220/23] from ovn-kubernetes | |
openstack |
multus |
glance-3a5fd-default-external-api-0 |
AddedInterface |
Add storage [172.18.0.30/24] from openstack/storage | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Created |
Created container: glance-log | |
openstack |
multus |
glance-3a5fd-default-internal-api-0 |
AddedInterface |
Add eth0 [10.128.0.221/23] from ovn-kubernetes | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Started |
Started container glance-log | |
openstack |
multus |
glance-3a5fd-default-internal-api-0 |
AddedInterface |
Add storage [172.18.0.31/24] from openstack/storage | |
openstack |
kubelet |
placement-db-sync-2flmr |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:b8a5d052890fb9cefa333baf10b607add227ed5d79aa108b576a97b21e89327a" in 6.608s (6.608s including waiting). Image size: 472783568 bytes. | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Started |
Started container glance-httpd | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
placement-db-sync-2flmr |
Created |
Created container: placement-db-sync | |
openstack |
kubelet |
placement-db-sync-2flmr |
Started |
Started container placement-db-sync | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Created |
Created container: glance-httpd | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Started |
Started container glance-log | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Created |
Created container: glance-log | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Created |
Created container: glance-httpd | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Started |
Started container glance-httpd | |
openstack |
job-controller |
keystone-bootstrap |
Completed |
Job completed | |
openstack |
job-controller |
keystone-bootstrap |
SuccessfulCreate |
Created pod: keystone-bootstrap-xl426 | |
openstack |
kubelet |
dnsmasq-dns-9748bd58f-s2fbq |
Killing |
Stopping container dnsmasq-dns | |
openstack |
replicaset-controller |
dnsmasq-dns-9748bd58f |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-9748bd58f-s2fbq | |
openstack |
replicaset-controller |
dnsmasq-dns-6ddd7f485 |
SuccessfulCreate |
Created pod: dnsmasq-dns-6ddd7f485-2r6bg | |
openstack |
deployment-controller |
edpm-b-provisionserver-openstackprovisionserver |
ScalingReplicaSet |
Scaled up replica set edpm-b-provisionserver-openstackprovisionserver-5dcffdb788 to 1 | |
openstack |
replicaset-controller |
edpm-b-provisionserver-openstackprovisionserver-5dcffdb788 |
SuccessfulCreate |
Created pod: edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm | |
| (x2) | openstack |
kubelet |
dnsmasq-dns-9748bd58f-s2fbq |
Unhealthy |
Readiness probe failed: dial tcp 10.128.0.202:5353: connect: connection refused |
openstack |
kubelet |
cinder-7ba05-db-sync-jdc2m |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" in 25.67s (25.67s including waiting). Image size: 1161166113 bytes. | |
openstack |
kubelet |
keystone-bootstrap-xl426 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
kubelet |
edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm |
Pulled |
Container image "quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:894ce79b38510973ca610423cc34a7383b7761b6ceb47d18637daffaa93336f7" already present on machine | |
openstack |
kubelet |
edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm |
Created |
Created container: init | |
openstack |
kubelet |
edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:894ce79b38510973ca610423cc34a7383b7761b6ceb47d18637daffaa93336f7" in 34.193s (34.193s including waiting). Image size: 964857828 bytes. | |
openstack |
kubelet |
edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm |
Started |
Started container init | |
openstack |
kubelet |
keystone-bootstrap-xl426 |
Started |
Started container keystone-bootstrap | |
openstack |
kubelet |
edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 |
Created |
Created container: init | |
openstack |
kubelet |
edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
kubelet |
keystone-bootstrap-xl426 |
Created |
Created container: keystone-bootstrap | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Created |
Created container: init | |
openstack |
multus |
keystone-bootstrap-xl426 |
AddedInterface |
Add eth0 [10.128.0.222/23] from ovn-kubernetes | |
openstack |
multus |
dnsmasq-dns-6ddd7f485-2r6bg |
AddedInterface |
Add eth0 [10.128.0.223/23] from ovn-kubernetes | |
openstack |
job-controller |
placement-db-sync |
Completed |
Job completed | |
openstack |
kubelet |
cinder-7ba05-db-sync-jdc2m |
Created |
Created container: cinder-7ba05-db-sync | |
openstack |
kubelet |
cinder-7ba05-db-sync-jdc2m |
Started |
Started container cinder-7ba05-db-sync | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Started |
Started container dnsmasq-dns | |
openstack |
deployment-controller |
placement |
ScalingReplicaSet |
Scaled up replica set placement-687479ff9d to 1 | |
openstack |
replicaset-controller |
placement-687479ff9d |
SuccessfulCreate |
Created pod: placement-687479ff9d-8shw8 | |
openstack |
kubelet |
edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 |
Pulling |
Pulling image "registry.redhat.io/ubi9/httpd-24@sha256:e0697f36760789183fefc807dafa3bfeb4098725f923eb9a8f034725a01fbf9f" | |
openstack |
kubelet |
edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm |
Pulling |
Pulling image "registry.redhat.io/ubi9/httpd-24@sha256:e0697f36760789183fefc807dafa3bfeb4098725f923eb9a8f034725a01fbf9f" | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:b8a5d052890fb9cefa333baf10b607add227ed5d79aa108b576a97b21e89327a" already present on machine | |
openstack |
multus |
placement-687479ff9d-8shw8 |
AddedInterface |
Add eth0 [10.128.0.224/23] from ovn-kubernetes | |
openstack |
job-controller |
neutron-db-sync |
Completed |
Job completed | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Started |
Started container placement-log | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Started |
Started container placement-api | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Created |
Created container: placement-api | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:b8a5d052890fb9cefa333baf10b607add227ed5d79aa108b576a97b21e89327a" already present on machine | |
| (x2) | openstack |
metallb-controller |
neutron-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
neutron-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack |
metallb-controller |
neutron-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
metallb-controller |
neutron-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
kubelet |
dnsmasq-dns-6ddd7f485-2r6bg |
Killing |
Stopping container dnsmasq-dns | |
openstack |
deployment-controller |
neutron |
ScalingReplicaSet |
Scaled up replica set neutron-85f97d8d64 to 1 | |
openstack |
replicaset-controller |
dnsmasq-dns-6ddd7f485 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6ddd7f485-2r6bg | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Created |
Created container: placement-log | |
openstack |
replicaset-controller |
dnsmasq-dns-849fd5d677 |
SuccessfulCreate |
Created pod: dnsmasq-dns-849fd5d677-sdj8j | |
openstack |
cert-manager-certificates-trigger |
neutron-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
replicaset-controller |
neutron-85f97d8d64 |
SuccessfulCreate |
Created pod: neutron-85f97d8d64-dfwgh | |
openstack |
deployment-controller |
keystone |
ScalingReplicaSet |
Scaled up replica set keystone-6b44d66bc9 to 1 | |
openstack |
job-controller |
keystone-bootstrap |
Completed |
Job completed | |
openstack |
replicaset-controller |
keystone-6b44d66bc9 |
SuccessfulCreate |
Created pod: keystone-6b44d66bc9-5zxbb | |
openstack |
kubelet |
dnsmasq-dns-849fd5d677-sdj8j |
Started |
Started container init | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
keystone-6b44d66bc9-5zxbb |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-acme |
neutron-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
neutron-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
neutron-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
dnsmasq-dns-849fd5d677-sdj8j |
Created |
Created container: init | |
openstack |
multus |
keystone-6b44d66bc9-5zxbb |
AddedInterface |
Add eth0 [10.128.0.227/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-key-manager |
neutron-internal-svc |
Generated |
Stored new private key in temporary Secret resource "neutron-internal-svc-mqw8p" | |
openstack |
kubelet |
dnsmasq-dns-849fd5d677-sdj8j |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
cert-manager-certificates-request-manager |
neutron-internal-svc |
Requested |
Created new CertificateRequest resource "neutron-internal-svc-1" | |
openstack |
multus |
dnsmasq-dns-849fd5d677-sdj8j |
AddedInterface |
Add eth0 [10.128.0.225/23] from ovn-kubernetes | |
openstack |
kubelet |
neutron-85f97d8d64-dfwgh |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
multus |
neutron-85f97d8d64-dfwgh |
AddedInterface |
Add internalapi [172.17.0.32/24] from openstack/internalapi | |
openstack |
deployment-controller |
neutron |
ScalingReplicaSet |
Scaled up replica set neutron-7cd95f9d78 to 1 | |
openstack |
multus |
neutron-85f97d8d64-dfwgh |
AddedInterface |
Add eth0 [10.128.0.226/23] from ovn-kubernetes | |
openstack |
| | openstack | replicaset-controller | neutron-7cd95f9d78 | SuccessfulCreate | Created pod: neutron-7cd95f9d78-s2fkv |
| | openstack | kubelet | keystone-6b44d66bc9-5zxbb | Started | Started container keystone-api |
| | openstack | cert-manager-certificates-trigger | neutron-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | neutron-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | metallb-controller | cinder-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Started | Started container neutron-httpd |
| | openstack | kubelet | dnsmasq-dns-849fd5d677-sdj8j | Started | Started container dnsmasq-dns |
| | openstack | multus | neutron-7cd95f9d78-s2fkv | AddedInterface | Add internalapi [172.17.0.33/24] from openstack/internalapi |
| | openstack | kubelet | dnsmasq-dns-849fd5d677-sdj8j | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-849fd5d677-sdj8j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Created | Created container: neutron-httpd |
| | openstack | kubelet | keystone-6b44d66bc9-5zxbb | Created | Created container: keystone-api |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Started | Started container neutron-api |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | cert-manager-certificates-trigger | neutron-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | neutron-public-svc | Generated | Stored new private key in temporary Secret resource "neutron-public-svc-dqnxb" |
| | openstack | cert-manager-certificates-request-manager | neutron-public-svc | Requested | Created new CertificateRequest resource "neutron-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | neutron-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Created | Created container: neutron-api |
| | openstack | multus | neutron-7cd95f9d78-s2fkv | AddedInterface | Add eth0 [10.128.0.228/23] from ovn-kubernetes |
| | openstack | job-controller | cinder-7ba05-db-sync | Completed | Job completed |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | neutron-public-route | Requested | Created new CertificateRequest resource "neutron-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | neutron-public-route | Generated | Stored new private key in temporary Secret resource "neutron-public-route-xk2xn" |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | replicaset-controller | dnsmasq-dns-849fd5d677 | SuccessfulDelete | Deleted pod: dnsmasq-dns-849fd5d677-sdj8j |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Created | Created container: neutron-api |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Started | Started container neutron-httpd |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Created | Created container: neutron-httpd |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Started | Started container neutron-api |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-7ba05-backup-0 | AddedInterface | Add eth0 [10.128.0.231/23] from ovn-kubernetes |
| | openstack | multus | cinder-7ba05-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage |
| | openstack | kubelet | cinder-7ba05-backup-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:2781f3bed351ce4c77a235e2381576637203459384fd93e05584a0013b3fe93e" |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:7423f71c91f5a1d0aec9dcf0993db6e2495b520b5e5bbcf1615b9ac9759c0a58" |
| | openstack | multus | cinder-7ba05-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.230/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | cinder-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | cinder-internal-svc | Generated | Stored new private key in temporary Secret resource "cinder-internal-svc-4bm6q" |
| | openstack | cert-manager-certificates-request-manager | cinder-internal-svc | Requested | Created new CertificateRequest resource "cinder-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | neutron-public-route | Issuing | The certificate has been successfully issued |
| | openstack | replicaset-controller | dnsmasq-dns-6897ccd865 | SuccessfulCreate | Created pod: dnsmasq-dns-6897ccd865-b6qgp |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9e7c747eeeefb391dc6dedaaac57fa694c4d08b991c54bb99aa6de77451e792f" |
| | openstack | multus | cinder-7ba05-scheduler-0 | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6897ccd865-b6qgp | Created | Created container: init |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | cinder-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-6ddd7f485-2r6bg | Unhealthy | Readiness probe failed: dial tcp 10.128.0.223:5353: i/o timeout |
| | openstack | multus | cinder-7ba05-api-0 | AddedInterface | Add eth0 [10.128.0.233/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-849fd5d677-sdj8j | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-6897ccd865-b6qgp | Started | Started container init |
| | openstack | kubelet | cinder-7ba05-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6897ccd865-b6qgp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | multus | dnsmasq-dns-6897ccd865-b6qgp | AddedInterface | Add eth0 [10.128.0.232/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | cinder-public-svc | Requested | Created new CertificateRequest resource "cinder-public-svc-1" |
| | openstack | cert-manager-certificates-key-manager | cinder-public-svc | Generated | Stored new private key in temporary Secret resource "cinder-public-svc-ffhr7" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | cinder-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | cinder-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | cinder-7ba05-api-0 | Created | Created container: cinder-7ba05-api-log |
| | openstack | kubelet | cinder-7ba05-api-0 | Started | Started container cinder-7ba05-api-log |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | cinder-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | cinder-public-route | Generated | Stored new private key in temporary Secret resource "cinder-public-route-f85z4" |
| | openstack | cert-manager-certificates-request-manager | cinder-public-route | Requested | Created new CertificateRequest resource "cinder-public-route-1" |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Killing | Stopping container neutron-httpd |
| | openstack | statefulset-controller | cinder-7ba05-api | SuccessfulDelete | delete Pod cinder-7ba05-api-0 in StatefulSet cinder-7ba05-api successful |
| | openstack | cert-manager-certificates-issuing | cinder-public-route | Issuing | The certificate has been successfully issued |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled down replica set neutron-85f97d8d64 to 0 from 1 |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Killing | Stopping container neutron-api |
| | openstack | replicaset-controller | neutron-85f97d8d64 | SuccessfulDelete | Deleted pod: neutron-85f97d8d64-dfwgh |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Unhealthy | Readiness probe failed: Get "http://10.128.0.226:9696/": EOF |
| | openstack | replicaset-controller | neutron-77db675565 | SuccessfulCreate | Created pod: neutron-77db675565-g4zz2 |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-77db675565 to 1 from 0 |
| | openstack | kubelet | dnsmasq-dns-6897ccd865-b6qgp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | cinder-7ba05-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" already present on machine |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9e7c747eeeefb391dc6dedaaac57fa694c4d08b991c54bb99aa6de77451e792f" in 6.933s (6.933s including waiting). Image size: 1083121304 bytes. |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:7423f71c91f5a1d0aec9dcf0993db6e2495b520b5e5bbcf1615b9ac9759c0a58" in 6.942s (6.942s including waiting). Image size: 1084059606 bytes. |
| | openstack | kubelet | edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm | Pulled | Successfully pulled image "registry.redhat.io/ubi9/httpd-24@sha256:e0697f36760789183fefc807dafa3bfeb4098725f923eb9a8f034725a01fbf9f" in 13.764s (13.764s including waiting). Image size: 313866450 bytes. |
| | openstack | kubelet | edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 | Pulled | Successfully pulled image "registry.redhat.io/ubi9/httpd-24@sha256:e0697f36760789183fefc807dafa3bfeb4098725f923eb9a8f034725a01fbf9f" in 14.956s (14.956s including waiting). Image size: 313866450 bytes. |
| | openstack | kubelet | cinder-7ba05-backup-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:2781f3bed351ce4c77a235e2381576637203459384fd93e05584a0013b3fe93e" in 6.583s (6.583s including waiting). Image size: 1083126547 bytes. |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Created | Created container: cinder-volume |
| | openstack | kubelet | cinder-7ba05-backup-0 | Started | Started container probe |
| | openstack | kubelet | neutron-77db675565-g4zz2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:7423f71c91f5a1d0aec9dcf0993db6e2495b520b5e5bbcf1615b9ac9759c0a58" already present on machine |
| | openstack | kubelet | cinder-7ba05-backup-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-7ba05-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:2781f3bed351ce4c77a235e2381576637203459384fd93e05584a0013b3fe93e" already present on machine |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Started | Started container cinder-volume |
| | openstack | kubelet | cinder-7ba05-backup-0 | Started | Started container cinder-backup |
| | openstack | kubelet | neutron-77db675565-g4zz2 | Started | Started container neutron-httpd |
| | openstack | multus | neutron-77db675565-g4zz2 | AddedInterface | Add internalapi [172.17.0.34/24] from openstack/internalapi |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Started | Started container probe |
| | openstack | kubelet | edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm | Started | Started container osp-httpd |
| | openstack | multus | neutron-77db675565-g4zz2 | AddedInterface | Add eth0 [10.128.0.234/23] from ovn-kubernetes |
| | openstack | kubelet | edpm-b-provisionserver-openstackprovisionserver-5dcffdb788cr7nm | Created | Created container: osp-httpd |
| | openstack | kubelet | edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 | Created | Created container: osp-httpd |
| | openstack | kubelet | edpm-a-provisionserver-openstackprovisionserver-7544578cbc568v5 | Started | Started container osp-httpd |
| | openstack | kubelet | neutron-77db675565-g4zz2 | Created | Created container: neutron-httpd |
| | openstack | kubelet | neutron-77db675565-g4zz2 | Started | Started container neutron-api |
| | openstack | kubelet | neutron-77db675565-g4zz2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6897ccd865-b6qgp | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | cinder-7ba05-api-0 | Created | Created container: cinder-api |
| | openstack | kubelet | dnsmasq-dns-6897ccd865-b6qgp | Started | Started container dnsmasq-dns |
| | openstack | kubelet | neutron-77db675565-g4zz2 | Created | Created container: neutron-api |
| | openstack | kubelet | cinder-7ba05-backup-0 | Created | Created container: cinder-backup |
| | openstack | kubelet | cinder-7ba05-api-0 | Killing | Stopping container cinder-api |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Created | Created container: cinder-scheduler |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Started | Started container cinder-scheduler |
| | openstack | kubelet | cinder-7ba05-api-0 | Started | Started container cinder-api |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9e7c747eeeefb391dc6dedaaac57fa694c4d08b991c54bb99aa6de77451e792f" already present on machine |
| | openstack | kubelet | cinder-7ba05-api-0 | Killing | Stopping container cinder-7ba05-api-log |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Started | Started container probe |
| | openstack | kubelet | dnsmasq-dns-849fd5d677-sdj8j | Unhealthy | Readiness probe failed: dial tcp 10.128.0.225:5353: i/o timeout |
| (x2) | openstack | statefulset-controller | cinder-7ba05-api | SuccessfulCreate | create Pod cinder-7ba05-api-0 in StatefulSet cinder-7ba05-api successful |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | kubelet | cinder-7ba05-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" already present on machine |
| | openstack | multus | cinder-7ba05-api-0 | AddedInterface | Add eth0 [10.128.0.235/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-7ba05-api-0 | Created | Created container: cinder-7ba05-api-log |
| | openstack | kubelet | cinder-7ba05-api-0 | Started | Started container cinder-7ba05-api-log |
| | openstack | kubelet | cinder-7ba05-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" already present on machine |
| | openstack | job-controller | edpm-a-provisionserver-checksum-discovery | SuccessfulCreate | Created pod: edpm-a-provisionserver-checksum-discovery-lfsjb |
| | openstack | kubelet | cinder-7ba05-api-0 | Created | Created container: cinder-api |
| | openstack | kubelet | cinder-7ba05-api-0 | Started | Started container cinder-api |
| | openstack | multus | edpm-a-provisionserver-checksum-discovery-lfsjb | AddedInterface | Add eth0 [10.128.0.236/23] from ovn-kubernetes |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Pulled | Container image "quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:894ce79b38510973ca610423cc34a7383b7761b6ceb47d18637daffaa93336f7" already present on machine |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Killing | Stopping container cinder-volume |
| | openstack | statefulset-controller | cinder-7ba05-volume-lvm-iscsi | SuccessfulDelete | delete Pod cinder-7ba05-volume-lvm-iscsi-0 in StatefulSet cinder-7ba05-volume-lvm-iscsi successful |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Created | Created container: init |
| | openstack | kubelet | cinder-7ba05-backup-0 | Killing | Stopping container cinder-backup |
| | openstack | kubelet | cinder-7ba05-backup-0 | Killing | Stopping container probe |
| | openstack | statefulset-controller | cinder-7ba05-backup | SuccessfulDelete | delete Pod cinder-7ba05-backup-0 in StatefulSet cinder-7ba05-backup successful |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Started | Started container init |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Killing | Stopping container probe |
| | openstack | statefulset-controller | cinder-7ba05-scheduler | SuccessfulDelete | delete Pod cinder-7ba05-scheduler-0 in StatefulSet cinder-7ba05-scheduler successful |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Killing | Stopping container probe |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Killing | Stopping container cinder-scheduler |
| | openstack | replicaset-controller | dnsmasq-dns-7cb6bf676c | SuccessfulDelete | Deleted pod: dnsmasq-dns-7cb6bf676c-xlvsw |
| | openstack | kubelet | dnsmasq-dns-7cb6bf676c-xlvsw | Killing | Stopping container dnsmasq-dns |
| (x2) | openstack | statefulset-controller | cinder-7ba05-volume-lvm-iscsi | SuccessfulCreate | create Pod cinder-7ba05-volume-lvm-iscsi-0 in StatefulSet cinder-7ba05-volume-lvm-iscsi successful |
| | openstack | job-controller | edpm-b-provisionserver-checksum-discovery | SuccessfulCreate | Created pod: edpm-b-provisionserver-checksum-discovery-x7j8z |
| | openstack | multus | edpm-b-provisionserver-checksum-discovery-x7j8z | AddedInterface | Add eth0 [10.128.0.238/23] from ovn-kubernetes |
| (x2) | openstack | statefulset-controller | cinder-7ba05-backup | SuccessfulCreate | create Pod cinder-7ba05-backup-0 in StatefulSet cinder-7ba05-backup successful |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:7423f71c91f5a1d0aec9dcf0993db6e2495b520b5e5bbcf1615b9ac9759c0a58" already present on machine |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Started | Started container cinder-volume |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Created | Created container: cinder-volume |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:7423f71c91f5a1d0aec9dcf0993db6e2495b520b5e5bbcf1615b9ac9759c0a58" already present on machine |
| | openstack | multus | cinder-7ba05-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.237/23] from ovn-kubernetes |
| (x2) | openstack | statefulset-controller | cinder-7ba05-scheduler | SuccessfulCreate | create Pod cinder-7ba05-scheduler-0 in StatefulSet cinder-7ba05-scheduler successful |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Started | Started container init |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Created | Created container: init |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Pulled | Container image "quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:894ce79b38510973ca610423cc34a7383b7761b6ceb47d18637daffaa93336f7" already present on machine |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Created | Created container: probe |
| | openstack | multus | cinder-7ba05-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9e7c747eeeefb391dc6dedaaac57fa694c4d08b991c54bb99aa6de77451e792f" already present on machine |
| | openstack | kubelet | cinder-7ba05-volume-lvm-iscsi-0 | Started | Started container probe |
| | openstack | multus | cinder-7ba05-backup-0 | AddedInterface | Add eth0 [10.128.0.239/23] from ovn-kubernetes |
| | openstack | multus | cinder-7ba05-scheduler-0 | AddedInterface | Add eth0 [10.128.0.240/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-7ba05-backup-0 | Created | Created container: cinder-backup |
| | openstack | kubelet | cinder-7ba05-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:2781f3bed351ce4c77a235e2381576637203459384fd93e05584a0013b3fe93e" already present on machine |
| | openstack | kubelet | cinder-7ba05-backup-0 | Started | Started container cinder-backup |
| | openstack | kubelet | cinder-7ba05-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:2781f3bed351ce4c77a235e2381576637203459384fd93e05584a0013b3fe93e" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7cb6bf676c-xlvsw | Unhealthy | Readiness probe failed: dial tcp 10.128.0.217:5353: i/o timeout |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:55a1ea09e9211b1719a2377854012765e767931e1d437fc1b5d4722863dcb4fb" |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9e7c747eeeefb391dc6dedaaac57fa694c4d08b991c54bb99aa6de77451e792f" already present on machine |
| | openstack | kubelet | cinder-7ba05-backup-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Started | Started container cinder-scheduler |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Created | Created container: cinder-scheduler |
| | openstack | kubelet | cinder-7ba05-backup-0 | Started | Started container probe |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Started | Started container probe |
| | openstack | kubelet | cinder-7ba05-scheduler-0 | Created | Created container: probe |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:55a1ea09e9211b1719a2377854012765e767931e1d437fc1b5d4722863dcb4fb" in 4.617s (4.617s including waiting). Image size: 159969701 bytes. |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:55a1ea09e9211b1719a2377854012765e767931e1d437fc1b5d4722863dcb4fb" |
| | openstack | kubelet | neutron-85f97d8d64-dfwgh | Unhealthy | Readiness probe failed: Get "http://10.128.0.226:9696/": dial tcp 10.128.0.226:9696: connect: connection refused |
| | openstack | kubelet | cinder-7ba05-api-0 | Unhealthy | Liveness probe failed: Get "https://10.128.0.235:8776/healthcheck": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-67c9b9475d to 1 |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:55a1ea09e9211b1719a2377854012765e767931e1d437fc1b5d4722863dcb4fb" in 895ms (895ms including waiting). Image size: 159969701 bytes. |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Started | Started container edpm-a-provisionserver-checksum-discovery |
| | openstack | kubelet | edpm-a-provisionserver-checksum-discovery-lfsjb | Created | Created container: edpm-a-provisionserver-checksum-discovery |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Created | Created container: edpm-b-provisionserver-checksum-discovery |
| | openstack | kubelet | edpm-b-provisionserver-checksum-discovery-x7j8z | Started | Started container edpm-b-provisionserver-checksum-discovery |
| | openstack | kubelet | cinder-7ba05-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.235:8776/healthcheck": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | replicaset-controller | placement-67c9b9475d | SuccessfulCreate | Created pod: placement-67c9b9475d-ksb2w |
| | openstack | multus | placement-67c9b9475d-ksb2w | AddedInterface | Add eth0 [10.128.0.241/23] from ovn-kubernetes |
| | openstack | kubelet | placement-67c9b9475d-ksb2w | Started | Started container placement-log |
| | openstack | kubelet | placement-67c9b9475d-ksb2w | Started | Started container placement-api |
| | openstack | kubelet | placement-67c9b9475d-ksb2w | Created | Created container: placement-api |
| | openstack | kubelet | placement-67c9b9475d-ksb2w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:b8a5d052890fb9cefa333baf10b607add227ed5d79aa108b576a97b21e89327a" already present on machine |
| | openstack | kubelet | placement-67c9b9475d-ksb2w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:b8a5d052890fb9cefa333baf10b607add227ed5d79aa108b576a97b21e89327a" already present on machine |
| | openstack | kubelet | placement-67c9b9475d-ksb2w | Created | Created container: placement-log |
| | openstack | job-controller | edpm-a-provisionserver-checksum-discovery | Completed | Job completed |
| | openstack | metallb-speaker | keystone-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | edpm-b-provisionserver-checksum-discovery | Completed | Job completed |
| | openstack | metallb-speaker | cinder-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | neutron-7cd95f9d78 | SuccessfulDelete | Deleted pod: neutron-7cd95f9d78-s2fkv |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled down replica set neutron-7cd95f9d78 to 0 from 1 |
| | openstack | kubelet | openstackclient | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-gh9ft" : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7c0cf75d-1106-4b10-9d2d-c0238d30cd70) does not match the UID in record. The object might have been deleted and then recreated |
| | openstack | multus | openstackclient | AddedInterface | Add eth0 [10.128.0.243/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Killing | Stopping container neutron-api |
| | openstack | kubelet | openstackclient | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:d765b589a5f7bc8573b3b192ed265654699012e6342cc4829bd8ea65a7b239a5" |
| | openstack | kubelet | neutron-7cd95f9d78-s2fkv | Killing | Stopping container neutron-httpd |
| (x5) | openstack | metallb-speaker | neutron-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | swift-proxy-77dc968fc8 | SuccessfulCreate | Created pod: swift-proxy-77dc968fc8-nnkkj |
| | openstack | deployment-controller | swift-proxy | ScalingReplicaSet | Scaled up replica set swift-proxy-77dc968fc8 to 1 |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Started | Started container proxy-httpd |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Created | Created container: proxy-httpd |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:6c3eb966650a7a98feb4ddb31e1bdba1095b0c62e349196aca6a423681d7e5fb" already present on machine |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:6c3eb966650a7a98feb4ddb31e1bdba1095b0c62e349196aca6a423681d7e5fb" already present on machine |
| | openstack | multus | swift-proxy-77dc968fc8-nnkkj | AddedInterface | Add eth0 [10.128.0.244/23] from ovn-kubernetes |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Started | Started container proxy-server |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Created | Created container: proxy-server |
| | openstack | kubelet | openstackclient | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:d765b589a5f7bc8573b3b192ed265654699012e6342cc4829bd8ea65a7b239a5" in 15.16s (15.16s including waiting). Image size: 594372457 bytes. |
openstack |
kubelet |
openstackclient |
Created |
Created container: openstackclient | |
openstack |
metallb-speaker |
swift-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
kubelet |
openstackclient |
Started |
Started container openstackclient | |
openstack |
job-controller |
nova-api-db-create |
SuccessfulCreate |
Created pod: nova-api-db-create-j7tk6 | |
openstack |
job-controller |
nova-cell0-db-create |
SuccessfulCreate |
Created pod: nova-cell0-db-create-zlb6q | |
openstack |
multus |
nova-api-db-create-j7tk6 |
AddedInterface |
Add eth0 [10.128.0.245/23] from ovn-kubernetes | |
openstack |
job-controller |
nova-api-8395-account-create-update |
SuccessfulCreate |
Created pod: nova-api-8395-account-create-update-8nzrz | |
openstack |
kubelet |
nova-api-db-create-j7tk6 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Killing |
Stopping container glance-log | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Killing |
Stopping container glance-httpd | |
openstack |
kubelet |
nova-api-db-create-j7tk6 |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
nova-api-db-create-j7tk6 |
Created |
Created container: mariadb-database-create | |
| (x2) | openstack |
statefulset-controller |
glance-3a5fd-default-external-api |
SuccessfulDelete |
delete Pod glance-3a5fd-default-external-api-0 in StatefulSet glance-3a5fd-default-external-api successful |
openstack |
job-controller |
nova-cell1-db-create |
SuccessfulCreate |
Created pod: nova-cell1-db-create-jshcc | |
openstack |
kubelet |
nova-api-8395-account-create-update-8nzrz |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Killing |
Stopping container glance-log | |
openstack |
kubelet |
nova-api-8395-account-create-update-8nzrz |
Created |
Created container: mariadb-account-create-update | |
openstack |
multus |
nova-api-8395-account-create-update-8nzrz |
AddedInterface |
Add eth0 [10.128.0.247/23] from ovn-kubernetes | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Killing |
Stopping container glance-httpd | |
| (x2) | openstack |
statefulset-controller |
glance-3a5fd-default-internal-api |
SuccessfulDelete |
delete Pod glance-3a5fd-default-internal-api-0 in StatefulSet glance-3a5fd-default-internal-api successful |
openstack |
kubelet |
nova-cell0-db-create-zlb6q |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
nova-cell0-db-create-zlb6q |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
multus |
nova-cell0-db-create-zlb6q |
AddedInterface |
Add eth0 [10.128.0.246/23] from ovn-kubernetes | |
openstack |
multus |
nova-cell1-db-create-jshcc |
AddedInterface |
Add eth0 [10.128.0.248/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-db-create-zlb6q |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
nova-api-8395-account-create-update-8nzrz |
Started |
Started container mariadb-account-create-update | |
openstack |
job-controller |
nova-cell0-ea37-account-create-update |
SuccessfulCreate |
Created pod: nova-cell0-ea37-account-create-update-c8nf7 | |
openstack |
kubelet |
nova-cell1-db-create-jshcc |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
kubelet |
nova-cell1-db-create-jshcc |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
nova-cell1-db-create-jshcc |
Started |
Started container mariadb-database-create | |
openstack |
job-controller |
nova-cell1-5ec6-account-create-update |
SuccessfulCreate |
Created pod: nova-cell1-5ec6-account-create-update-fn8fv | |
openstack |
job-controller |
nova-api-db-create |
Completed |
Job completed | |
openstack |
deployment-controller |
placement |
ScalingReplicaSet |
Scaled down replica set placement-687479ff9d to 0 from 1 | |
openstack |
replicaset-controller |
placement-687479ff9d |
SuccessfulDelete |
Deleted pod: placement-687479ff9d-8shw8 | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Killing |
Stopping container placement-log | |
openstack |
multus |
nova-cell1-5ec6-account-create-update-fn8fv |
AddedInterface |
Add eth0 [10.128.0.250/23] from ovn-kubernetes | |
openstack |
multus |
nova-cell0-ea37-account-create-update-c8nf7 |
AddedInterface |
Add eth0 [10.128.0.249/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-ea37-account-create-update-c8nf7 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
kubelet |
placement-687479ff9d-8shw8 |
Killing |
Stopping container placement-api | |
openstack |
kubelet |
nova-cell0-ea37-account-create-update-c8nf7 |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
nova-cell1-5ec6-account-create-update-fn8fv |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95" already present on machine | |
openstack |
kubelet |
nova-cell0-ea37-account-create-update-c8nf7 |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
nova-cell1-5ec6-account-create-update-fn8fv |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
nova-cell1-5ec6-account-create-update-fn8fv |
Created |
Created container: mariadb-account-create-update | |
openstack |
job-controller |
nova-api-8395-account-create-update |
Completed |
Job completed | |
| (x3) | openstack |
statefulset-controller |
glance-3a5fd-default-external-api |
SuccessfulCreate |
create Pod glance-3a5fd-default-external-api-0 in StatefulSet glance-3a5fd-default-external-api successful |
openstack |
job-controller |
nova-cell0-db-create |
Completed |
Job completed | |
openstack |
job-controller |
nova-cell1-db-create |
Completed |
Job completed | |
openstack |
multus |
glance-3a5fd-default-external-api-0 |
AddedInterface |
Add eth0 [10.128.0.251/23] from ovn-kubernetes | |
| (x3) | openstack |
statefulset-controller |
glance-3a5fd-default-internal-api |
SuccessfulCreate |
create Pod glance-3a5fd-default-internal-api-0 in StatefulSet glance-3a5fd-default-internal-api successful |
openstack |
job-controller |
nova-cell1-5ec6-account-create-update |
Completed |
Job completed | |
| (x5) | openstack |
metallb-speaker |
placement-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
openstack |
multus |
glance-3a5fd-default-internal-api-0 |
AddedInterface |
Add eth0 [10.128.0.252/23] from ovn-kubernetes | |
openstack |
multus |
glance-3a5fd-default-external-api-0 |
AddedInterface |
Add storage [172.18.0.30/24] from openstack/storage | |
openstack |
job-controller |
nova-cell0-ea37-account-create-update |
Completed |
Job completed | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
multus |
glance-3a5fd-default-internal-api-0 |
AddedInterface |
Add storage [172.18.0.31/24] from openstack/storage | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Created |
Created container: glance-log | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Started |
Started container glance-log | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Started |
Started container glance-log | |
openstack |
job-controller |
nova-cell0-conductor-db-sync |
SuccessfulCreate |
Created pod: nova-cell0-conductor-db-sync-x9mns | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120" already present on machine | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Created |
Created container: glance-log | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Created |
Created container: glance-httpd | |
openstack |
kubelet |
glance-3a5fd-default-external-api-0 |
Started |
Started container glance-httpd | |
openstack |
multus |
nova-cell0-conductor-db-sync-x9mns |
AddedInterface |
Add eth0 [10.128.0.253/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-conductor-db-sync-x9mns |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Created |
Created container: glance-httpd | |
openstack |
kubelet |
glance-3a5fd-default-internal-api-0 |
Started |
Started container glance-httpd | |
openstack |
kubelet |
nova-cell0-conductor-db-sync-x9mns |
Created |
Created container: nova-cell0-conductor-db-sync | |
openstack |
kubelet |
nova-cell0-conductor-db-sync-x9mns |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" in 7.898s (7.898s including waiting). Image size: 667925477 bytes. | |
openstack |
kubelet |
nova-cell0-conductor-db-sync-x9mns |
Started |
Started container nova-cell0-conductor-db-sync | |
| (x3) | openstack |
metallb-speaker |
glance-default-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
openstack |
job-controller |
nova-cell0-conductor-db-sync |
Completed |
Job completed | |
openstack |
statefulset-controller |
nova-cell0-conductor |
SuccessfulCreate |
create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful | |
openstack |
multus |
nova-cell0-conductor-0 |
AddedInterface |
Add eth0 [10.128.0.254/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-conductor-0 |
Started |
Started container nova-cell0-conductor-conductor | |
openstack |
kubelet |
nova-cell0-conductor-0 |
Created |
Created container: nova-cell0-conductor-conductor | |
openstack |
kubelet |
nova-cell0-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" already present on machine | |
openstack |
metallb-controller |
nova-metadata-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
job-controller |
nova-cell0-cell-mapping |
SuccessfulCreate |
Created pod: nova-cell0-cell-mapping-t8sfd | |
| (x2) | openstack |
metallb-controller |
nova-metadata-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack |
metallb-controller |
nova-metadata-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack |
metallb-controller |
nova-metadata-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
cert-manager-certificates-trigger |
nova-metadata-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
multus |
nova-cell0-cell-mapping-t8sfd |
AddedInterface |
Add eth0 [10.128.0.255/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-api-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" | |
openstack |
kubelet |
nova-cell0-cell-mapping-t8sfd |
Created |
Created container: nova-manage | |
openstack |
kubelet |
nova-cell0-cell-mapping-t8sfd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" already present on machine | |
openstack |
replicaset-controller |
dnsmasq-dns-b4cc6f549 |
SuccessfulCreate |
Created pod: dnsmasq-dns-b4cc6f549-55sdk | |
openstack |
multus |
nova-api-0 |
AddedInterface |
Add eth0 [10.128.1.0/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
nova-metadata-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
nova-scheduler-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:d89e44b4641e8bd60abf1b674253975596fafc490022169681555069174a414e" | |
openstack |
kubelet |
nova-metadata-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" | |
openstack |
kubelet |
nova-cell0-cell-mapping-t8sfd |
Started |
Started container nova-manage | |
openstack |
multus |
nova-metadata-0 |
AddedInterface |
Add eth0 [10.128.1.3/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
nova-scheduler-0 |
AddedInterface |
Add eth0 [10.128.1.2/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
job-controller |
nova-cell1-conductor-db-sync |
SuccessfulCreate |
Created pod: nova-cell1-conductor-db-sync-s6rxr | |
openstack |
cert-manager-certificaterequests-approver |
nova-metadata-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
multus |
dnsmasq-dns-b4cc6f549-55sdk |
AddedInterface |
Add eth0 [10.128.1.4/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-b4cc6f549-55sdk |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-metadata-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
nova-metadata-internal-svc |
Generated |
Stored new private key in temporary Secret resource "nova-metadata-internal-svc-qfc69" | |
openstack |
multus |
nova-cell1-novncproxy-0 |
AddedInterface |
Add eth0 [10.128.1.1/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:228b3c59ea6527048a4b3d1e340c15f22dcf9f9ba8f302d6263f2e4ef79463ff" | |
openstack |
cert-manager-certificates-request-manager |
nova-metadata-internal-svc |
Requested |
Created new CertificateRequest resource "nova-metadata-internal-svc-1" | |
openstack |
multus |
nova-cell1-conductor-db-sync-s6rxr |
AddedInterface |
Add eth0 [10.128.1.5/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
dnsmasq-dns-b4cc6f549-55sdk |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-b4cc6f549-55sdk |
Created |
Created container: init | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
nova-cell1-conductor-db-sync-s6rxr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-b4cc6f549-55sdk |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-public-route |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-public-svc |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-x5hg9" | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-public-svc |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1" | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-public-route |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-5qvjq" | |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-vencrypt |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-vencrypt |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-pmxxp" | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-vencrypt-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-vencrypt |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-vencrypt |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-vencrypt-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" in 3.325s (3.325s including waiting). Image size: 684724449 bytes. | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:228b3c59ea6527048a4b3d1e340c15f22dcf9f9ba8f302d6263f2e4ef79463ff" in 3.614s (3.614s including waiting). Image size: 670289900 bytes. | |
openstack |
kubelet |
nova-cell1-conductor-db-sync-s6rxr |
Created |
Created container: nova-cell1-conductor-db-sync | |
openstack |
kubelet |
nova-cell1-conductor-db-sync-s6rxr |
Started |
Started container nova-cell1-conductor-db-sync | |
openstack |
statefulset-controller |
nova-cell1-novncproxy |
SuccessfulDelete |
delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful | |
openstack |
kubelet |
nova-scheduler-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:d89e44b4641e8bd60abf1b674253975596fafc490022169681555069174a414e" in 3.169s (3.169s including waiting). Image size: 667925989 bytes. | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" in 3.936s (3.936s including waiting). Image size: 684724449 bytes. | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Started |
Started container nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-log | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-api | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Created |
Created container: nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-log | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Killing |
Stopping container nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-metadata | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-b4cc6f549-55sdk |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-api | |
openstack |
kubelet |
nova-scheduler-0 |
Started |
Started container nova-scheduler-scheduler | |
openstack |
kubelet |
nova-scheduler-0 |
Created |
Created container: nova-scheduler-scheduler | |
openstack |
kubelet |
dnsmasq-dns-b4cc6f549-55sdk |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-log | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-metadata | |
openstack |
multus |
nova-metadata-0 |
AddedInterface |
Add eth0 [10.128.1.6/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-log | |
openstack |
replicaset-controller |
dnsmasq-dns-6897ccd865 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6897ccd865-b6qgp | |
openstack |
kubelet |
dnsmasq-dns-6897ccd865-b6qgp |
Killing |
Stopping container dnsmasq-dns | |
openstack |
job-controller |
nova-cell0-cell-mapping |
Completed |
Job completed | |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.0:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.0:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-metadata | |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed |
| | openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful |
| | openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor |
| | openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:d89e44b4641e8bd60abf1b674253975596fafc490022169681555069174a414e" already present on machine |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.7:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.7:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.10:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.10:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful |
| | openstack | replicaset-controller | dnsmasq-dns-5687765f45 | SuccessfulCreate | Created pod: dnsmasq-dns-5687765f45-jhnth |
| (x25) | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | (combined from similar events): Scaled up replica set dnsmasq-dns-5687765f45 to 1 |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | multus | dnsmasq-dns-5687765f45-jhnth | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:228b3c59ea6527048a4b3d1e340c15f22dcf9f9ba8f302d6263f2e4ef79463ff" already present on machine |
| | openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-6862m" |
| | openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1" |
| | openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-5687765f45-jhnth | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-5687765f45-jhnth | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-5687765f45-jhnth | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-z2c5x" |
| | openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1" |
| | openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-qdw6z" |
| | openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-5687765f45-jhnth | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-5687765f45-jhnth | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-5687765f45-jhnth | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" already present on machine |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-nzzbt |
| | openstack | kubelet | dnsmasq-dns-b4cc6f549-55sdk | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-b4cc6f549 | SuccessfulDelete | Deleted pod: dnsmasq-dns-b4cc6f549-55sdk |
| | openstack | kubelet | nova-cell1-cell-mapping-nzzbt | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-cell-mapping-nzzbt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:d347d48e9a8ae4136dd99c5222480ceccb2819beaf80b11048644d4acf0a4305" already present on machine |
| | openstack | kubelet | nova-cell1-cell-mapping-nzzbt | Created | Created container: nova-manage |
| | openstack | multus | nova-cell1-cell-mapping-nzzbt | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes |
| (x11) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service |
| (x11) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service |
| | openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.13:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| (x2) | openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.13:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| (x4) | openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| (x3) | openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| (x4) | openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.17/23] from ovn-kubernetes |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:d89e44b4641e8bd60abf1b674253975596fafc490022169681555069174a414e" already present on machine |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:faf711e1e5fa2ad74a73d3dfffd88f6312fb045cb69e9b7e6331558784163d16" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.15:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.15:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.16:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.16:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | job-controller | keystone-cron-29565241 | SuccessfulCreate | Created pod: keystone-cron-29565241-vpcdg |
| | openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29565241 |
| | openstack | multus | keystone-cron-29565241-vpcdg | AddedInterface | Add eth0 [10.128.1.18/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-cron-29565241-vpcdg | Started | Started container keystone-cron |
| | openstack | kubelet | keystone-cron-29565241-vpcdg | Created | Created container: keystone-cron |
| | openstack | kubelet | keystone-cron-29565241-vpcdg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" already present on machine |
| | openstack | job-controller | keystone-cron-29565241 | Completed | Job completed |
| | openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29565241, condition: Complete |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | default | operator-lifecycle-manager | metallb-system | ResolutionFailed | error using catalogsource openshift-marketplace/redhat-operators: error encountered while listing bundles: rpc error: code = DeadlineExceeded desc = context deadline exceeded |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | kubelet | swift-proxy-77dc968fc8-nnkkj | Unhealthy | Liveness probe failed: HTTP probe failed with statuscode: 502 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-mgklh | Unhealthy | Readiness probe failed: Get "http://10.128.0.137:6080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-mgklh | Unhealthy | Readiness probe failed: Get "http://10.128.0.137:6080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29565301 |
| | openstack | job-controller | keystone-cron-29565301 | SuccessfulCreate | Created pod: keystone-cron-29565301-skqb8 |
| | openstack | kubelet | keystone-cron-29565301-skqb8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:3a148c87899d4cbbb6bbe1203ad6c237fb295b5f42abda425dc0329305723414" already present on machine |
| | openstack | multus | keystone-cron-29565301-skqb8 | AddedInterface | Add eth0 [10.128.1.19/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-cron-29565301-skqb8 | Started | Started container keystone-cron |
| | openstack | kubelet | keystone-cron-29565301-skqb8 | Created | Created container: keystone-cron |
| | openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29565301, condition: Complete |
| | openstack | job-controller | keystone-cron-29565301 | Completed | Job completed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-p4zsp namespace |