[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (edpm_node_hostname). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (edpm_node_ip). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (source_galera_members). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (source_mariadb_ip). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (enable_tlse). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (neutron_qe_test). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (tobiko_qe_test). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (neutron_qe_dir). Using last defined value only.
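Editor's note: the duplicate dict key warnings here mean vars.yaml defines the same key more than once inside a single mapping; Ansible's YAML loader keeps only the last occurrence and silently discards the earlier ones. A minimal hypothetical reproduction (not the actual CI file; hostnames and addresses are invented):

```yaml
# vars.yaml -- hypothetical reproduction of the duplicate-key situation
edpm_node_hostname: compute-0.localdomain   # silently discarded
edpm_node_ip: 192.168.122.100               # silently discarded
# ... later in the same top-level mapping ...
edpm_node_hostname: compute-1.localdomain   # "last defined value" wins
edpm_node_ip: 192.168.122.101               # "last defined value" wins
```

Removing or consolidating the earlier definitions makes the warnings go away and makes the effective values explicit.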
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (prelaunch_barbican_secret). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (os_cloud_name). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (standalone_ip). Using last defined value only.
Using /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/ansible.cfg as config file

PLAY [Externalize Ceph] ********************************************************

TASK [Gathering Facts] *********************************************************
ok: [np0005625199.localdomain]

TASK [ceph_migrate : Check file in the src directory] **************************
[WARNING]: Skipped '/home/tripleo-admin/ceph_client' path due to this access issue: '/home/tripleo-admin/ceph_client' is not a directory
ok: [np0005625199.localdomain] => {"changed": false, "examined": 0, "files": [], "matched": 0, "msg": "Not all paths examined, check warnings for details", "skipped_paths": {"/home/tripleo-admin/ceph_client": "'/home/tripleo-admin/ceph_client' is not a directory"}}

TASK [ceph_migrate : Restore files] ********************************************
skipping: [np0005625199.localdomain] => (item=ceph.conf) => {"ansible_loop_var": "item", "changed": false, "false_condition": "dir_ceph_files.files | length > 0", "item": "ceph.conf", "skip_reason": "Conditional result was False"}
skipping: [np0005625199.localdomain] => (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": false, "false_condition": "dir_ceph_files.files | length > 0", "item": 
"ceph.client.admin.keyring", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure backup directory exists] *************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph Health] ****************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "-s", "-f", "json"], "delta": "0:00:02.817668", "end": "2026-02-20 09:38:45.120933", "msg": "", "rc": 0, "start": "2026-02-20 09:38:42.303265", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":14,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625199\",\"np0005625201\",\"np0005625200\"],\"quorum_age\":7170,\"monmap\":{\"epoch\":3,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":81,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":570081280,\"bytes_avail\":44501909504,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":7,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625201.iogohb\",\"status\":\"up:active\",\"gid\":24328}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":67,\"modified\":\"2026-02-20T09:38:18.259787+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":14,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625199\",\"np0005625201\",\"np0005625200\"],\"quorum_age\":7170,\"monmap\":{\"epoch\":3,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":81,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":570081280,\"bytes_avail\":44501909504,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":7,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625201.iogohb\",\"status\":\"up:active\",\"gid\":24328}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":67,\"modified\":\"2026-02-20T09:38:18.259787+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Load ceph data] ******************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph": {"election_epoch": 14, "fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "fsmap": {"by_rank": [{"filesystem_id": 1, "gid": 24328, "name": "mds.np0005625201.iogohb", "rank": 0, "status": "up:active"}], "epoch": 7, "id": 1, "in": 1, "max": 1, "up": 1, "up:standby": 2}, "health": {"checks": {}, "mutes": [], "status": "HEALTH_OK"}, "mgrmap": {"available": true, "modules": ["cephadm", "iostat", "nfs", "restful"], "num_standbys": 2, "services": {}}, "monmap": {"epoch": 3, "min_mon_release_name": "reef", "num_mons": 3}, "osdmap": {"epoch": 81, "num_in_osds": 6, "num_osds": 6, "num_remapped_pgs": 0, "num_up_osds": 6, "osd_in_since": 1771573176, "osd_up_since": 1771573196}, "pgmap": {"bytes_avail": 
44501909504, "bytes_total": 45071990784, "bytes_used": 570081280, "data_bytes": 109571242, "num_objects": 69, "num_pgs": 177, "num_pools": 7, "pgs_by_state": [{"count": 177, "state_name": "active+clean"}]}, "progress_events": {}, "quorum": [0, 1, 2], "quorum_age": 7170, "quorum_names": ["np0005625199", "np0005625201", "np0005625200"], "servicemap": {"epoch": 67, "modified": "2026-02-20T09:38:18.259787+0000", "services": {}}}}, "changed": false} TASK [ceph_migrate : Dump ceph -s output to log file] ************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph Orch ServiceMap] ********************************* changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "ls", "-f", "json"], "delta": "0:00:02.997069", "end": "2026-02-20 09:38:48.870732", "msg": "", "rc": 0, "start": "2026-02-20 09:38:45.873663", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"events\": [\"2026-02-20T07:39:30.778579Z service:crash [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": 
{\"created\": \"2026-02-20T07:37:30.228656Z\", \"last_refresh\": \"2026-02-20T09:32:12.732011Z\", \"running\": 6, \"size\": 6}}, {\"events\": [\"2026-02-20T07:58:32.809588Z service:mds.mds [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005625199.localdomain\", \"np0005625200.localdomain\", \"np0005625201.localdomain\"]}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-20T07:58:22.096582Z\", \"last_refresh\": \"2026-02-20T09:32:12.732137Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T07:39:16.837579Z service:mgr [INFO] \\\"service was created\\\"\", \"2026-02-20T07:38:20.880815Z service:mgr [ERROR] \\\"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005625199.localdomain\", \"np0005625200.localdomain\", \"np0005625201.localdomain\"]}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-20T07:38:11.420295Z\", \"last_refresh\": \"2026-02-20T09:32:12.731875Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T07:39:08.172132Z service:mon [INFO] \\\"service was created\\\"\", \"2026-02-20T07:38:20.879430Z service:mon [ERROR] \\\"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005625199.localdomain\", \"np0005625200.localdomain\", \"np0005625201.localdomain\"]}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-20T07:38:11.412226Z\", \"last_refresh\": \"2026-02-20T09:32:12.731692Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T07:37:44.405855Z service:node-proxy [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-20T07:37:44.380465Z\", \"running\": 0, \"size\": 0}}, {\"events\": 
[\"2026-02-20T07:38:11.435774Z service:osd.default_drive_group [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005625202.localdomain\", \"np0005625203.localdomain\", \"np0005625204.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-20T07:38:11.428636Z\", \"last_refresh\": \"2026-02-20T09:35:20.408179Z\", \"running\": 6, \"size\": 6}}]", "stdout_lines": ["", "[{\"events\": [\"2026-02-20T07:39:30.778579Z service:crash [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2026-02-20T07:37:30.228656Z\", \"last_refresh\": \"2026-02-20T09:32:12.732011Z\", \"running\": 6, \"size\": 6}}, {\"events\": [\"2026-02-20T07:58:32.809588Z service:mds.mds [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005625199.localdomain\", \"np0005625200.localdomain\", \"np0005625201.localdomain\"]}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-20T07:58:22.096582Z\", \"last_refresh\": \"2026-02-20T09:32:12.732137Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T07:39:16.837579Z service:mgr [INFO] \\\"service was created\\\"\", \"2026-02-20T07:38:20.880815Z service:mgr [ERROR] \\\"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005625199.localdomain\", \"np0005625200.localdomain\", \"np0005625201.localdomain\"]}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-20T07:38:11.420295Z\", \"last_refresh\": \"2026-02-20T09:32:12.731875Z\", \"running\": 3, \"size\": 3}}, 
{\"events\": [\"2026-02-20T07:39:08.172132Z service:mon [INFO] \\\"service was created\\\"\", \"2026-02-20T07:38:20.879430Z service:mon [ERROR] \\\"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005625199.localdomain\", \"np0005625200.localdomain\", \"np0005625201.localdomain\"]}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-20T07:38:11.412226Z\", \"last_refresh\": \"2026-02-20T09:32:12.731692Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T07:37:44.405855Z service:node-proxy [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-20T07:37:44.380465Z\", \"running\": 0, \"size\": 0}}, {\"events\": [\"2026-02-20T07:38:11.435774Z service:osd.default_drive_group [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005625202.localdomain\", \"np0005625203.localdomain\", \"np0005625204.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-20T07:38:11.428636Z\", \"last_refresh\": \"2026-02-20T09:35:20.408179Z\", \"running\": 6, \"size\": 6}}]"]} TASK [ceph_migrate : Load Service Map] ***************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"servicemap": [{"events": ["2026-02-20T07:39:30.778579Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-20T07:37:30.228656Z", "last_refresh": "2026-02-20T09:32:12.732011Z", "running": 6, "size": 6}}, {"events": ["2026-02-20T07:58:32.809588Z 
service:mds.mds [INFO] \"service was created\""], "placement": {"hosts": ["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-20T07:58:22.096582Z", "last_refresh": "2026-02-20T09:32:12.732137Z", "running": 3, "size": 3}}, {"events": ["2026-02-20T07:39:16.837579Z service:mgr [INFO] \"service was created\"", "2026-02-20T07:38:20.880815Z service:mgr [ERROR] \"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-20T07:38:11.420295Z", "last_refresh": "2026-02-20T09:32:12.731875Z", "running": 3, "size": 3}}, {"events": ["2026-02-20T07:39:08.172132Z service:mon [INFO] \"service was created\"", "2026-02-20T07:38:20.879430Z service:mon [ERROR] \"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-20T07:38:11.412226Z", "last_refresh": "2026-02-20T09:32:12.731692Z", "running": 3, "size": 3}}, {"events": ["2026-02-20T07:37:44.405855Z service:node-proxy [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-20T07:37:44.380465Z", "running": 0, "size": 0}}, {"events": ["2026-02-20T07:38:11.435774Z service:osd.default_drive_group [INFO] \"service was created\""], "placement": {"hosts": ["np0005625202.localdomain", "np0005625203.localdomain", "np0005625204.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": 
["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-20T07:38:11.428636Z", "last_refresh": "2026-02-20T09:35:20.408179Z", "running": 6, "size": 6}}]}, "changed": false} TASK [ceph_migrate : Print Service Map] **************************************** skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T07:39:30.778579Z service:crash [INFO] "service was created"'], 'placement': {'host_pattern': '*'}, 'service_name': 'crash', 'service_type': 'crash', 'status': {'created': '2026-02-20T07:37:30.228656Z', 'last_refresh': '2026-02-20T09:32:12.732011Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T07:39:30.778579Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-20T07:37:30.228656Z", "last_refresh": "2026-02-20T09:32:12.732011Z", "running": 6, "size": 6}}} skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T07:58:32.809588Z service:mds.mds [INFO] "service was created"'], 'placement': {'hosts': ['np0005625199.localdomain', 'np0005625200.localdomain', 'np0005625201.localdomain']}, 'service_id': 'mds', 'service_name': 'mds.mds', 'service_type': 'mds', 'status': {'created': '2026-02-20T07:58:22.096582Z', 'last_refresh': '2026-02-20T09:32:12.732137Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T07:58:32.809588Z service:mds.mds [INFO] \"service was created\""], "placement": {"hosts": ["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-20T07:58:22.096582Z", "last_refresh": "2026-02-20T09:32:12.732137Z", "running": 3, "size": 3}}} 
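Editor's note: the "Load Service Map" task parses `ceph orch ls -f json` with `from_json`, and the resulting service map is plain JSON that can be inspected outside Ansible as well — for example to flag services whose `running` count trails their `size`, or whose event log carries an `[ERROR]` line such as the "Cannot place on np0005625201.localdomain: Unknown hosts" failures recorded above. A minimal sketch (the helper name is invented; the sample is trimmed from this log):

```python
import json

def degraded_services(servicemap):
    """Return names of services not fully running, or that logged an [ERROR] event."""
    flagged = []
    for svc in servicemap:
        status = svc.get("status", {})
        not_full = status.get("running", 0) < status.get("size", 0)
        has_error = any("[ERROR]" in ev for ev in svc.get("events", []))
        if not_full or has_error:
            flagged.append(svc["service_name"])
    return flagged

# Two entries trimmed from the `ceph orch ls -f json` output in this log:
sample = json.loads("""
[
  {"service_name": "crash", "events": [], "status": {"running": 6, "size": 6}},
  {"service_name": "mgr",
   "events": ["2026-02-20T07:38:20.880815Z service:mgr [ERROR] \\"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\\""],
   "status": {"running": 3, "size": 3}}
]
""")
print(degraded_services(sample))  # -> ['mgr'] (flagged for its stale [ERROR] event)
```

Note that the mgr/mon `[ERROR]` events in this run are stale: both services ended up 3/3 running once the hosts became known, so an event-based check like this one reports history, not necessarily current health.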
skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T07:39:16.837579Z service:mgr [INFO] "service was created"', '2026-02-20T07:38:20.880815Z service:mgr [ERROR] "Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts"'], 'placement': {'hosts': ['np0005625199.localdomain', 'np0005625200.localdomain', 'np0005625201.localdomain']}, 'service_name': 'mgr', 'service_type': 'mgr', 'status': {'created': '2026-02-20T07:38:11.420295Z', 'last_refresh': '2026-02-20T09:32:12.731875Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T07:39:16.837579Z service:mgr [INFO] \"service was created\"", "2026-02-20T07:38:20.880815Z service:mgr [ERROR] \"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-20T07:38:11.420295Z", "last_refresh": "2026-02-20T09:32:12.731875Z", "running": 3, "size": 3}}} skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T07:39:08.172132Z service:mon [INFO] "service was created"', '2026-02-20T07:38:20.879430Z service:mon [ERROR] "Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts"'], 'placement': {'hosts': ['np0005625199.localdomain', 'np0005625200.localdomain', 'np0005625201.localdomain']}, 'service_name': 'mon', 'service_type': 'mon', 'status': {'created': '2026-02-20T07:38:11.412226Z', 'last_refresh': '2026-02-20T09:32:12.731692Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T07:39:08.172132Z service:mon [INFO] \"service was created\"", "2026-02-20T07:38:20.879430Z service:mon [ERROR] \"Failed to apply: Cannot place on np0005625201.localdomain: Unknown hosts\""], "placement": {"hosts": 
["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-20T07:38:11.412226Z", "last_refresh": "2026-02-20T09:32:12.731692Z", "running": 3, "size": 3}}} skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T07:37:44.405855Z service:node-proxy [INFO] "service was created"'], 'placement': {'host_pattern': '*'}, 'service_name': 'node-proxy', 'service_type': 'node-proxy', 'status': {'created': '2026-02-20T07:37:44.380465Z', 'running': 0, 'size': 0}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T07:37:44.405855Z service:node-proxy [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-20T07:37:44.380465Z", "running": 0, "size": 0}}} skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T07:38:11.435774Z service:osd.default_drive_group [INFO] "service was created"'], 'placement': {'hosts': ['np0005625202.localdomain', 'np0005625203.localdomain', 'np0005625204.localdomain']}, 'service_id': 'default_drive_group', 'service_name': 'osd.default_drive_group', 'service_type': 'osd', 'spec': {'data_devices': {'paths': ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1']}, 'filter_logic': 'AND', 'objectstore': 'bluestore'}, 'status': {'created': '2026-02-20T07:38:11.428636Z', 'last_refresh': '2026-02-20T09:35:20.408179Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T07:38:11.435774Z service:osd.default_drive_group [INFO] \"service was created\""], "placement": {"hosts": ["np0005625202.localdomain", "np0005625203.localdomain", "np0005625204.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": 
{"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-20T07:38:11.428636Z", "last_refresh": "2026-02-20T09:35:20.408179Z", "running": 6, "size": 6}}} skipping: [np0005625199.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch ls output to log file] ********************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph config] ****************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "config", "dump", "-f", "json"], "delta": "0:00:02.855020", "end": "2026-02-20 09:38:52.517399", "msg": "", "rc": 0, "start": "2026-02-20 09:38:49.662379", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625202\",\"location_type\":\"host\",\"location_value\":\"np0005625202\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"
value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625203\",\"location_type\":\"host\",\"location_value\":\"np0005625203\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625204\",\"location_type\":\"host\",\"location_value\":\"np0005625204\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]", "stdout_lines": ["", "[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/c
ephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625202\",\"location_type\":\"host\",\"location_value\":\"np0005625202\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625203\",\"location_type\":\"host\",\"location_value\":\"np0005625203\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625204\",\"location_type\":\"host\",\"location_value\":\"np0005625204\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]"]} TASK [ceph_migrate : Print Ceph config dump] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph config dump output to log file] ***************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph Orch Host Map] *********************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "host", "ls", "-f", "json"], "delta": 
"0:00:02.855704", "end": "2026-02-20 09:38:56.169022", "msg": "", "rc": 0, "start": "2026-02-20 09:38:53.313318", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005625199.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.104\", \"hostname\": \"np0005625200.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.105\", \"hostname\": \"np0005625201.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.106\", \"hostname\": \"np0005625202.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005625203.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005625204.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005625199.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.104\", \"hostname\": \"np0005625200.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.105\", 
\"hostname\": \"np0005625201.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.106\", \"hostname\": \"np0005625202.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005625203.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005625204.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}]"]} TASK [ceph_migrate : Load nodes] *********************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"nds": [{"addr": "192.168.122.103", "hostname": "np0005625199.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.104", "hostname": "np0005625200.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.105", "hostname": "np0005625201.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.106", "hostname": "np0005625202.localdomain", "labels": ["osd"], "status": ""}, {"addr": "192.168.122.107", "hostname": "np0005625203.localdomain", "labels": ["osd"], "status": ""}, {"addr": "192.168.122.108", "hostname": "np0005625204.localdomain", "labels": ["osd"], "status": ""}]}, "changed": false} TASK [ceph_migrate : Load hostmap List] **************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"hostmap": {"np0005625199.localdomain": ["_admin", "mon", "mgr"], "np0005625200.localdomain": ["_admin", "mon", "mgr"], "np0005625201.localdomain": ["_admin", "mon", "mgr"], "np0005625202.localdomain": ["osd"], "np0005625203.localdomain": ["osd"], "np0005625204.localdomain": ["osd"]}}, "changed": false} TASK [ceph_migrate : Print Host Map] ******************************************* skipping: [np0005625199.localdomain] => (item=np0005625199.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625199.localdomain"} 
skipping: [np0005625199.localdomain] => (item=np0005625200.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625200.localdomain"} skipping: [np0005625199.localdomain] => (item=np0005625201.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625201.localdomain"} skipping: [np0005625199.localdomain] => (item=np0005625202.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625202.localdomain"} skipping: [np0005625199.localdomain] => (item=np0005625203.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625203.localdomain"} skipping: [np0005625199.localdomain] => (item=np0005625204.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625204.localdomain"} skipping: [np0005625199.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch host ls output to log file] **************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph monmap and load data] **************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "mon", "dump", "-f", "json"], "delta": "0:00:02.811908", "end": "2026-02-20 09:38:59.841127", "msg": "", "rc": 0, "start": "2026-02-20 09:38:57.029219", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\ndumped monmap epoch 3", "stderr_lines": ["Inferring fsid 
a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/mon.np0005625199/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "dumped monmap epoch 3"], "stdout": "\n{\"epoch\":3,\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"modified\":\"2026-02-20T07:39:09.430412Z\",\"created\":\"2026-02-20T07:36:51.191305Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: \":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005625199\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005625201\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005625200\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}", "stdout_lines": ["", 
"{\"epoch\":3,\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"modified\":\"2026-02-20T07:39:09.430412Z\",\"created\":\"2026-02-20T07:36:51.191305Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: \":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005625199\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005625201\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005625200\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}"]} TASK [ceph_migrate : Get Monmap] *********************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mon_dump": {"created": "2026-02-20T07:36:51.191305Z", "disallowed_leaders: ": "", "election_strategy": 1, "epoch": 3, "features": {"optional": [], "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging", "quincy", "reef"]}, "fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "min_mon_release": 18, 
"min_mon_release_name": "reef", "modified": "2026-02-20T07:39:09.430412Z", "mons": [{"addr": "172.18.0.103:6789/0", "crush_location": "{}", "name": "np0005625199", "priority": 0, "public_addr": "172.18.0.103:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.103:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.103:6789", "nonce": 0, "type": "v1"}]}, "rank": 0, "weight": 0}, {"addr": "172.18.0.105:6789/0", "crush_location": "{}", "name": "np0005625201", "priority": 0, "public_addr": "172.18.0.105:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.105:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.105:6789", "nonce": 0, "type": "v1"}]}, "rank": 1, "weight": 0}, {"addr": "172.18.0.104:6789/0", "crush_location": "{}", "name": "np0005625200", "priority": 0, "public_addr": "172.18.0.104:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.104:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.104:6789", "nonce": 0, "type": "v1"}]}, "rank": 2, "weight": 0}], "quorum": [0, 1, 2], "removed_ranks: ": "", "stretch_mode": false, "tiebreaker_mon": ""}}, "changed": false} TASK [ceph_migrate : Print monmap] ********************************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph mon dump output to log file] ******************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Load nodes to decommission] ******************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"decomm_nodes": ["np0005625199.localdomain", "np0005625200.localdomain", "np0005625201.localdomain"]}, "changed": false} TASK [ceph_migrate : Load target nodes] **************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"target_nodes": ["np0005625202.localdomain", "np0005625203.localdomain", "np0005625204.localdomain"]}, 
"changed": false} TASK [ceph_migrate : Print target nodes] *************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Print decomm_nodes] *************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : ansible.builtin.fail if input is not provided] ************ skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph is undefined or ceph | length == 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get cluster health] *************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if health is HEALTH_WARN || HEALTH_ERR] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph.health.status == 'HEALTH_WARN' or ceph.health.status == 'HEALTH_ERR'", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : PgMap] **************************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if PGs are not in active+clean state] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "pgstate != 'active+clean'", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : OSDMap] *************************************************** ok: [np0005625199.localdomain] => { "msg": "100.0" } TASK [ceph_migrate : ansible.builtin.fail if there is an unacceptable OSDs number] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "pct | float < 100", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MonMap] *************************************************** skipping: [np0005625199.localdomain] => {"false_condition": "check_ceph_release | 
default(false) | bool"} TASK [ceph_migrate : ansible.builtin.fail if Ceph <= Quincy] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "check_ceph_release | default(false) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Mons in quorum] ******************************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if Mons are not in quorum] *********** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph.monmap.num_mons < decomm_nodes | length", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : is Ceph Mgr available] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if Mgr is not available] ************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not ceph.mgrmap.available | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : in progress events] *************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if there are in progress events] ***** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph.progress_events | length > 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Dump Ceph Status] ***************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] 
********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : set container image base in ceph configuration] *********** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_base", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"], "delta": "0:00:00.688942", "end": "2026-02-20 09:39:02.371483", "msg": "", "rc": 0, "start": "2026-02-20 09:39:01.682541", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : set alertmanager container image in ceph configuration] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set grafana container image in ceph configuration] ******** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set node-exporter container image in ceph configuration] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set prometheus container image in ceph configuration] ***** skipping: 
[np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set haproxy container image in ceph configuration] ******** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_haproxy", "registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest"], "delta": "0:00:00.821461", "end": "2026-02-20 09:39:03.987598", "msg": "", "rc": 0, "start": "2026-02-20 09:39:03.166137", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Set keepalived container image in ceph configuration] ***** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_keepalived", "registry.redhat.io/rhceph/keepalived-rhel9:latest"], "delta": "0:00:00.755010", "end": "2026-02-20 09:39:05.367037", "msg": "", "rc": 0, "start": "2026-02-20 09:39:04.612027", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Update firewall rules on the target nodes] **************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005625199.localdomain => (item=np0005625202.localdomain) included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005625199.localdomain => (item=np0005625203.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005625199.localdomain => (item=np0005625204.localdomain) TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005625199.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005625199.localdomain -> np0005625202.localdomain(192.168.122.106)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005625199.localdomain -> np0005625202.localdomain(192.168.122.106)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Fri 2026-02-20 07:48:42 UTC", "ActiveEnterTimestampMonotonic": "4268630976", 
"ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice sysinit.target basic.target systemd-journald.socket", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Fri 2026-02-20 07:48:42 UTC", "AssertTimestampMonotonic": "4268547420", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "30335000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2026-02-20 07:48:42 UTC", "ConditionTimestampMonotonic": "4268547418", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", 
"ExecMainExitTimestamp": "Fri 2026-02-20 07:48:42 UTC", "ExecMainExitTimestampMonotonic": "4268630695", "ExecMainPID": "43382", "ExecMainStartTimestamp": "Fri 2026-02-20 07:48:42 UTC", "ExecMainStartTimestampMonotonic": "4268548992", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", 
"IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2026-02-20 07:48:42 UTC", "InactiveExitTimestampMonotonic": "4268549241", "InvocationID": "59c2d026adc64bdaae6c13893a116c23", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", 
"NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Fri 2026-02-20 07:48:42 UTC", "StateChangeTimestampMonotonic": "4268630976", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", 
"TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005625199.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005625199.localdomain -> np0005625203.localdomain(192.168.122.107)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005625199.localdomain -> np0005625203.localdomain(192.168.122.107)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": 
"system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ActiveEnterTimestampMonotonic": "4281951147", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice basic.target sysinit.target systemd-journald.socket", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Fri 2026-02-20 07:48:43 UTC", "AssertTimestampMonotonic": "4281868965", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "22126000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ConditionTimestampMonotonic": "4281868963", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", 
"Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ExecMainExitTimestampMonotonic": "4281950814", "ExecMainPID": "42733", "ExecMainStartTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ExecMainStartTimestampMonotonic": "4281870262", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", 
"IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2026-02-20 07:48:43 UTC", "InactiveExitTimestampMonotonic": "4281870446", "InvocationID": "c502debbad894324bdc1e5d00910a5cd", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", 
"NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target system.slice", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Fri 2026-02-20 07:48:43 UTC", "StateChangeTimestampMonotonic": "4281951147", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": 
"no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005625199.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005625199.localdomain -> np0005625204.localdomain(192.168.122.108)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005625199.localdomain -> np0005625204.localdomain(192.168.122.108)] => 
{"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ActiveEnterTimestampMonotonic": "4282986625", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice systemd-journald.socket basic.target sysinit.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Fri 2026-02-20 07:48:43 UTC", "AssertTimestampMonotonic": "4282882245", "Before": "multi-user.target network-pre.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "33289000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ConditionTimestampMonotonic": "4282882244", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": 
"0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ExecMainExitTimestampMonotonic": "4282986238", "ExecMainPID": "42989", "ExecMainStartTimestamp": "Fri 2026-02-20 07:48:43 UTC", "ExecMainStartTimestampMonotonic": "4282895889", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": 
"18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2026-02-20 07:48:43 UTC", "InactiveExitTimestampMonotonic": "4282896083", "InvocationID": "9690e28632f24f9d80f08a7e35f1e173", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", 
"MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Fri 2026-02-20 07:48:43 UTC", "StateChangeTimestampMonotonic": "4282986625", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", 
"SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Get ceph_cli] ********************************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the dashboard port] *********************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the dashboard ssl port] ******************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Disable mgr dashboard module (restart)] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Enable mgr dashboard module (restart)] ******************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": 
"ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the prometheus server port] *************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the prometheus server address] ************************ skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Enable prometheus module] ********************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => (item=['np0005625202.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005625202.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=['np0005625203.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005625203.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} 
skipping: [np0005625199.localdomain] => (item=['np0005625204.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005625204.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : MONITORING - Load Spec from the orchestrator] ************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the loaded data] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Update the Monitoring Stack spec definition] ************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : MONITORING - wait daemons] ******************************** skipping: [np0005625199.localdomain] => (item=grafana) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "grafana", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=prometheus) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "prometheus", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=alertmanager) => {"ansible_loop_var": "item", 
"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "alertmanager", "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Sleep before moving to the next daemon] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MDS - Load Spec from the orchestrator] ******************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mds_spec": {"service_name": "mds.mds", "service_type": "mds", "spec": {}}}, "changed": false} TASK [ceph_migrate : Print the resulting MDS spec] ***************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: 
[np0005625199.localdomain] => (item=['np0005625199.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625199.localdomain", "mds"], "delta": "0:00:00.735066", "end": "2026-02-20 09:39:18.999543", "item": ["np0005625199.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:39:18.264477", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005625199.localdomain", "stdout_lines": ["Added label mds to host np0005625199.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625200.localdomain", "mds"], "delta": "0:00:00.679577", "end": "2026-02-20 09:39:20.244891", "item": ["np0005625200.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:39:19.565314", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005625200.localdomain", "stdout_lines": ["Added label mds to host np0005625200.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", 
"a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625201.localdomain", "mds"], "delta": "0:00:00.704271", "end": "2026-02-20 09:39:21.547932", "item": ["np0005625201.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:39:20.843661", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005625201.localdomain", "stdout_lines": ["Added label mds to host np0005625201.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625202.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625202.localdomain", "mds"], "delta": "0:00:00.666815", "end": "2026-02-20 09:39:22.837125", "item": ["np0005625202.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:39:22.170310", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005625202.localdomain", "stdout_lines": ["Added label mds to host np0005625202.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625203.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625203.localdomain", "mds"], "delta": "0:00:00.691051", "end": "2026-02-20 09:39:24.112165", "item": ["np0005625203.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 
09:39:23.421114", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005625203.localdomain", "stdout_lines": ["Added label mds to host np0005625203.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625204.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625204.localdomain", "mds"], "delta": "0:00:01.118028", "end": "2026-02-20 09:39:25.771731", "item": ["np0005625204.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:39:24.653703", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005625204.localdomain", "stdout_lines": ["Added label mds to host np0005625204.localdomain"]} TASK [ceph_migrate : Update the MDS Daemon spec definition] ******************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mds:/home/tripleo-admin/mds:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mds"], "delta": "0:00:00.692063", "end": "2026-02-20 09:39:27.287409", "rc": 0, "start": "2026-02-20 09:39:26.595346", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mds.mds update...", "stdout_lines": ["Scheduled mds.mds update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | 
default(false)"} TASK [ceph_migrate : Wait for the orchestrator to process the spec] ************ Pausing for 30 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2026-02-20 09:39:27.506142", "stderr": "", "stdout": "Paused for 30.03 seconds", "stop": "2026-02-20 09:39:57.540314", "user_input": ""} TASK [ceph_migrate : Reload the updated mdsmap] ******************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "fs", "status", "cephfs", "-f", "json"], "delta": "0:00:00.717096", "end": "2026-02-20 09:39:58.765804", "msg": "", "rc": 0, "start": "2026-02-20 09:39:58.048708", "stderr": "", "stderr_lines": [], "stdout": "\n{\"clients\": [{\"clients\": 0, \"fs\": \"cephfs\"}], \"mds_version\": [{\"daemon\": [\"mds.np0005625201.iogohb\", \"mds.np0005625199.qrnucj\", \"mds.np0005625202.akhmop\", \"mds.np0005625200.jsqxyw\", \"mds.np0005625204.wnsphl\", \"mds.np0005625203.zsrwgk\"], \"version\": \"ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable)\"}], \"mdsmap\": [{\"caps\": 0, \"dirs\": 12, \"dns\": 10, \"inos\": 13, \"name\": \"mds.np0005625201.iogohb\", \"rank\": 0, \"rate\": 0, \"state\": \"active\"}, {\"name\": \"mds.np0005625199.qrnucj\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625202.akhmop\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625200.jsqxyw\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625204.wnsphl\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625203.zsrwgk\", \"state\": \"standby\"}], \"pools\": [{\"avail\": 14045934592, \"id\": 7, \"name\": \"manila_metadata\", \"type\": \"metadata\", \"used\": 98304}, {\"avail\": 14045934592, 
\"id\": 6, \"name\": \"manila_data\", \"type\": \"data\", \"used\": 0}]}", "stdout_lines": ["", "{\"clients\": [{\"clients\": 0, \"fs\": \"cephfs\"}], \"mds_version\": [{\"daemon\": [\"mds.np0005625201.iogohb\", \"mds.np0005625199.qrnucj\", \"mds.np0005625202.akhmop\", \"mds.np0005625200.jsqxyw\", \"mds.np0005625204.wnsphl\", \"mds.np0005625203.zsrwgk\"], \"version\": \"ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable)\"}], \"mdsmap\": [{\"caps\": 0, \"dirs\": 12, \"dns\": 10, \"inos\": 13, \"name\": \"mds.np0005625201.iogohb\", \"rank\": 0, \"rate\": 0, \"state\": \"active\"}, {\"name\": \"mds.np0005625199.qrnucj\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625202.akhmop\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625200.jsqxyw\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625204.wnsphl\", \"state\": \"standby\"}, {\"name\": \"mds.np0005625203.zsrwgk\", \"state\": \"standby\"}], \"pools\": [{\"avail\": 14045934592, \"id\": 7, \"name\": \"manila_metadata\", \"type\": \"metadata\", \"used\": 98304}, {\"avail\": 14045934592, \"id\": 6, \"name\": \"manila_data\", \"type\": \"data\", \"used\": 0}]}"]} TASK [ceph_migrate : Get MDS Daemons] ****************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mds_daemons": {"clients": [{"clients": 0, "fs": "cephfs"}], "mds_version": [{"daemon": ["mds.np0005625201.iogohb", "mds.np0005625199.qrnucj", "mds.np0005625202.akhmop", "mds.np0005625200.jsqxyw", "mds.np0005625204.wnsphl", "mds.np0005625203.zsrwgk"], "version": "ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable)"}], "mdsmap": [{"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "mds.np0005625201.iogohb", "rank": 0, "rate": 0, "state": "active"}, {"name": "mds.np0005625199.qrnucj", "state": "standby"}, {"name": "mds.np0005625202.akhmop", "state": "standby"}, {"name": "mds.np0005625200.jsqxyw", "state": "standby"}, {"name": 
"mds.np0005625204.wnsphl", "state": "standby"}, {"name": "mds.np0005625203.zsrwgk", "state": "standby"}], "pools": [{"avail": 14045934592, "id": 7, "name": "manila_metadata", "type": "metadata", "used": 98304}, {"avail": 14045934592, "id": 6, "name": "manila_data", "type": "data", "used": 0}]}}, "changed": false} TASK [ceph_migrate : Print Daemons] ******************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Get MDS daemons that are not part of decomm nodes] ******** skipping: [np0005625199.localdomain] => (item={'caps': 0, 'dirs': 12, 'dns': 10, 'inos': 13, 'name': 'mds.np0005625201.iogohb', 'rank': 0, 'rate': 0, 'state': 'active'}) => {"ansible_loop_var": "item", "changed": false, "false_condition": "item.state == \"standby\"", "item": {"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "mds.np0005625201.iogohb", "rank": 0, "rate": 0, "state": "active"}, "skip_reason": "Conditional result was False"} ok: [np0005625199.localdomain] => (item={'name': 'mds.np0005625199.qrnucj', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005625199.qrnucj", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005625199.qrnucj", "state": "standby"}} ok: [np0005625199.localdomain] => (item={'name': 'mds.np0005625202.akhmop', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005625202.akhmop", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005625202.akhmop", "state": "standby"}} ok: [np0005625199.localdomain] => (item={'name': 'mds.np0005625200.jsqxyw', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005625200.jsqxyw", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005625200.jsqxyw", "state": "standby"}} ok: [np0005625199.localdomain] => (item={'name': 
'mds.np0005625204.wnsphl', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005625204.wnsphl", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005625204.wnsphl", "state": "standby"}} ok: [np0005625199.localdomain] => (item={'name': 'mds.np0005625203.zsrwgk', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005625203.zsrwgk", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005625203.zsrwgk", "state": "standby"}} TASK [ceph_migrate : Affinity daemon selected] ********************************* ok: [np0005625199.localdomain] => { "msg": { "name": "mds.np0005625203.zsrwgk", "state": "standby" } } TASK [ceph_migrate : Set MDS affinity] ***************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring config set mds.np0005625203.zsrwgk mds_join_fs cephfs", "delta": "0:00:00.701689", "end": "2026-02-20 09:40:00.481245", "msg": "", "rc": 0, "start": "2026-02-20 09:39:59.779556", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005625199.localdomain] => (item=['np0005625199.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", 
"/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625199.localdomain", "mds"], "delta": "0:00:00.766795", "end": "2026-02-20 09:40:01.899688", "item": ["np0005625199.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:01.132893", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005625199.localdomain", "stdout_lines": ["Removed label mds from host np0005625199.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625200.localdomain", "mds"], "delta": "0:00:00.761511", "end": "2026-02-20 09:40:03.206269", "item": ["np0005625200.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:02.444758", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005625200.localdomain", "stdout_lines": ["Removed label mds from host np0005625200.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625201.localdomain", "mds"], "delta": "0:00:00.702423", "end": 
"2026-02-20 09:40:04.484930", "item": ["np0005625201.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:03.782507", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005625201.localdomain", "stdout_lines": ["Removed label mds from host np0005625201.localdomain"]} TASK [ceph_migrate : Wait daemons] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mds] ********************************************* changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mds", "-f", "json"], "delta": "0:00:00.721291", "end": "2026-02-20 09:40:05.967525", "msg": "", "rc": 0, "start": "2026-02-20 09:40:05.246234", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"a1d600c62f28\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.05%\", \"created\": \"2026-02-20T07:58:30.356709Z\", \"daemon_id\": \"mds.np0005625199.qrnucj\", 
\"daemon_name\": \"mds.mds.np0005625199.qrnucj\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T07:58:30.457809Z daemon:mds.mds.np0005625199.qrnucj [INFO] \\\"Deployed mds.mds.np0005625199.qrnucj on host 'np0005625199.localdomain'\\\"\"], \"hostname\": \"np0005625199.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:32:13.508864Z\", \"memory_usage\": 26298286, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T07:58:30.242462Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"e60a86ccbbca\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2026-02-20T07:58:32.690448Z\", \"daemon_id\": \"mds.np0005625200.jsqxyw\", \"daemon_name\": \"mds.mds.np0005625200.jsqxyw\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T07:58:32.774624Z daemon:mds.mds.np0005625200.jsqxyw [INFO] \\\"Deployed mds.mds.np0005625200.jsqxyw on host 'np0005625200.localdomain'\\\"\"], \"hostname\": \"np0005625200.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:32:13.569330Z\", \"memory_usage\": 28447866, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T07:58:32.563523Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"9c008b234bf9\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.16%\", \"created\": \"2026-02-20T07:58:28.192774Z\", \"daemon_id\": \"mds.np0005625201.iogohb\", \"daemon_name\": \"mds.mds.np0005625201.iogohb\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T07:58:28.271216Z daemon:mds.mds.np0005625201.iogohb [INFO] \\\"Deployed mds.mds.np0005625201.iogohb on host 'np0005625201.localdomain'\\\"\"], \"hostname\": \"np0005625201.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-20T09:32:12.732137Z\", \"memory_usage\": 27661434, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T07:58:28.070638Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"f72a43f164c5\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"5.67%\", \"created\": \"2026-02-20T09:39:34.156646Z\", \"daemon_id\": \"mds.np0005625202.akhmop\", \"daemon_name\": \"mds.mds.np0005625202.akhmop\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T09:39:34.236754Z daemon:mds.mds.np0005625202.akhmop [INFO] \\\"Deployed mds.mds.np0005625202.akhmop on host 'np0005625202.localdomain'\\\"\"], \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:39:36.287548Z\", \"memory_usage\": 15969812, \"ports\": [], \"service_name\": 
\"mds.mds\", \"started\": \"2026-02-20T09:39:34.056114Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"637cfe271116\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.38%\", \"created\": \"2026-02-20T09:39:31.897686Z\", \"daemon_id\": \"mds.np0005625203.zsrwgk\", \"daemon_name\": \"mds.mds.np0005625203.zsrwgk\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T09:39:31.962809Z daemon:mds.mds.np0005625203.zsrwgk [INFO] \\\"Deployed mds.mds.np0005625203.zsrwgk on host 'np0005625203.localdomain'\\\"\"], \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:39:36.453844Z\", \"memory_usage\": 16609443, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T09:39:31.792957Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"f489614c7976\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.32%\", \"created\": \"2026-02-20T09:39:29.485155Z\", \"daemon_id\": \"mds.np0005625204.wnsphl\", \"daemon_name\": \"mds.mds.np0005625204.wnsphl\", \"daemon_type\": \"mds\", \"events\": 
[\"2026-02-20T09:39:29.567027Z daemon:mds.mds.np0005625204.wnsphl [INFO] \\\"Deployed mds.mds.np0005625204.wnsphl on host 'np0005625204.localdomain'\\\"\"], \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:39:36.247171Z\", \"memory_usage\": 14323548, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T09:39:29.387471Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"a1d600c62f28\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.05%\", \"created\": \"2026-02-20T07:58:30.356709Z\", \"daemon_id\": \"mds.np0005625199.qrnucj\", \"daemon_name\": \"mds.mds.np0005625199.qrnucj\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T07:58:30.457809Z daemon:mds.mds.np0005625199.qrnucj [INFO] \\\"Deployed mds.mds.np0005625199.qrnucj on host 'np0005625199.localdomain'\\\"\"], \"hostname\": \"np0005625199.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:32:13.508864Z\", \"memory_usage\": 26298286, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T07:58:30.242462Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"e60a86ccbbca\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": 
\"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2026-02-20T07:58:32.690448Z\", \"daemon_id\": \"mds.np0005625200.jsqxyw\", \"daemon_name\": \"mds.mds.np0005625200.jsqxyw\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T07:58:32.774624Z daemon:mds.mds.np0005625200.jsqxyw [INFO] \\\"Deployed mds.mds.np0005625200.jsqxyw on host 'np0005625200.localdomain'\\\"\"], \"hostname\": \"np0005625200.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:32:13.569330Z\", \"memory_usage\": 28447866, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T07:58:32.563523Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"9c008b234bf9\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.16%\", \"created\": \"2026-02-20T07:58:28.192774Z\", \"daemon_id\": \"mds.np0005625201.iogohb\", \"daemon_name\": \"mds.mds.np0005625201.iogohb\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T07:58:28.271216Z daemon:mds.mds.np0005625201.iogohb [INFO] \\\"Deployed mds.mds.np0005625201.iogohb on host 'np0005625201.localdomain'\\\"\"], \"hostname\": \"np0005625201.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-20T09:32:12.732137Z\", \"memory_usage\": 27661434, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T07:58:28.070638Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, 
{\"container_id\": \"f72a43f164c5\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"5.67%\", \"created\": \"2026-02-20T09:39:34.156646Z\", \"daemon_id\": \"mds.np0005625202.akhmop\", \"daemon_name\": \"mds.mds.np0005625202.akhmop\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T09:39:34.236754Z daemon:mds.mds.np0005625202.akhmop [INFO] \\\"Deployed mds.mds.np0005625202.akhmop on host 'np0005625202.localdomain'\\\"\"], \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:39:36.287548Z\", \"memory_usage\": 15969812, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T09:39:34.056114Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"637cfe271116\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.38%\", \"created\": \"2026-02-20T09:39:31.897686Z\", \"daemon_id\": \"mds.np0005625203.zsrwgk\", \"daemon_name\": \"mds.mds.np0005625203.zsrwgk\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T09:39:31.962809Z daemon:mds.mds.np0005625203.zsrwgk [INFO] \\\"Deployed mds.mds.np0005625203.zsrwgk on host 
'np0005625203.localdomain'\\\"\"], \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:39:36.453844Z\", \"memory_usage\": 16609443, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T09:39:31.792957Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"f489614c7976\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.32%\", \"created\": \"2026-02-20T09:39:29.485155Z\", \"daemon_id\": \"mds.np0005625204.wnsphl\", \"daemon_name\": \"mds.mds.np0005625204.wnsphl\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-20T09:39:29.567027Z daemon:mds.mds.np0005625204.wnsphl [INFO] \\\"Deployed mds.mds.np0005625204.wnsphl on host 'np0005625204.localdomain'\\\"\"], \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:39:36.247171Z\", \"memory_usage\": 14323548, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-20T09:39:29.387471Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Sleep before moving to the next phase] ******************** Pausing for 30 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2026-02-20 09:40:06.190251", "stderr": "", "stdout": "Paused for 30.01 seconds", "stop": "2026-02-20 09:40:36.203431", "user_input": ""} TASK [ceph_migrate : Get ceph_cli] ********************************************* skipping: [np0005625199.localdomain] => {"changed": false, 
"false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if RGW VIPs are not defined] ************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => (item=['np0005625202.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005625202.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=['np0005625203.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005625203.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => (item=['np0005625204.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005625204.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005625199.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : RGW - Load Spec from the orchestrator] ******************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | 
default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the loaded data] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Apply ceph rgw keystone config] *************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Update the RGW spec definition] *************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Create the Ingress Daemon spec definition for RGW] ******** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Wait for cephadm to redeploy] ***************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : RGW - wait daemons] *************************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Setup a Ceph client to the first node] 
******************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_client.yaml for np0005625199.localdomain TASK [ceph_migrate : TMP_CLIENT - Patch os-net-config config and setup a tmp client IP] *** changed: [np0005625199.localdomain] => {"backup": "/etc/os-net-config/tripleo_config.yaml.473372.2026-02-20@09:40:37~", "changed": true, "msg": "line added and ownership, perms or SE linux context changed"} TASK [ceph_migrate : TMP_CLIENT - Refresh os-net-config] *********************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["os-net-config", "-c", "/etc/os-net-config/tripleo_config.yaml"], "delta": "0:00:07.318787", "end": "2026-02-20 09:40:45.371479", "msg": "", "rc": 0, "start": "2026-02-20 09:40:38.052692", "stderr": "", "stderr_lines": [], "stdout": "2026-02-20 09:40:38.945 ERROR os_net_config.execute stderr : WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.\nWARN : [ifdown] 'network-scripts' will be removed from distribution in near future.\nWARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.\n\n2026-02-20 09:40:45.307 ERROR os_net_config.execute stderr : WARN : [ifup] You are using 'ifup' script provided by 'network-scripts', which are now deprecated.\nWARN : [ifup] 'network-scripts' will be removed from distribution in near future.\nWARN : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.", "stdout_lines": ["2026-02-20 09:40:38.945 ERROR os_net_config.execute stderr : WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.", "WARN : [ifdown] 'network-scripts' will be removed from distribution in near future.", "WARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.", "", "2026-02-20 09:40:45.307 
ERROR os_net_config.execute stderr : WARN : [ifup] You are using 'ifup' script provided by 'network-scripts', which are now deprecated.", "WARN : [ifup] 'network-scripts' will be removed from distribution in near future.", "WARN : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well."]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005625199.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005625199.localdomain] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005625199.localdomain] => {"changed": false, "examined": 4, "files": [{"atime": 1771574273.374706, "ctime": 1771574272.3586786, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384846, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771573198.116001, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771574273.3857064, "ctime": 1771574272.3586786, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384845, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771573058.1703396, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": 
"root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771574314.1088104, "ctime": 1771574312.168758, "dev": 64516, "gid": 167, "gr_name": "", "inode": 142741512, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771574311.8547492, "nlink": 1, "path": "/etc/ceph/ceph.client.openstack.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771574315.2228405, "ctime": 1771574313.0257812, "dev": 64516, "gid": 167, "gr_name": "", "inode": 167853570, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771574312.7447734, "nlink": 1, "path": "/etc/ceph/ceph.client.manila.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 4, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005625199.localdomain] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 271, 'inode': 159384846, 'dev': 64516, 'nlink': 1, 'atime': 1771574273.374706, 'mtime': 1771573198.116001, 'ctime': 1771574272.3586786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": 
"item", "changed": true, "checksum": "936d449f31af670125791fe297b02d275b2ba4b7", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1771574273.374706, "ctime": 1771574272.3586786, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384846, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771573198.116001, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "13ba1621d325cdd2060a2f036a0f7e60", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 271, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005625199.localdomain] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 159384845, 'dev': 64516, 'nlink': 1, 'atime': 1771574273.3857064, 'mtime': 1771573058.1703396, 'ctime': 1771574272.3586786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "5d0ac48a08c9259d9227b20d376d9a209a188f22", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1771574273.3857064, "ctime": 1771574272.3586786, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384845, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 
1771573058.1703396, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "5fa07c3c2dd6bc5b069351fa5b7406dc", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} changed: [np0005625199.localdomain] => (item={'path': '/etc/ceph/ceph.client.openstack.keyring', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 167, 'gid': 167, 'size': 231, 'inode': 142741512, 'dev': 64516, 'nlink': 1, 'atime': 1771574314.1088104, 'mtime': 1771574311.8547492, 'ctime': 1771574312.168758, 'gr_name': '', 'pw_name': '', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "8e2004121a34320613d32710ae37702da8d027e6", "dest": "/home/tripleo-admin/ceph_client/ceph.client.openstack.keyring", "gid": 0, "group": "root", "item": {"atime": 1771574314.1088104, "ctime": 1771574312.168758, "dev": 64516, "gid": 167, "gr_name": "", "inode": 142741512, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771574311.8547492, "nlink": 1, "path": "/etc/ceph/ceph.client.openstack.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "00d8f4e97c7f3f6294d844b671a8278f", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 231, "src": 
"/etc/ceph/ceph.client.openstack.keyring", "state": "file", "uid": 0} changed: [np0005625199.localdomain] => (item={'path': '/etc/ceph/ceph.client.manila.keyring', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 167, 'gid': 167, 'size': 153, 'inode': 167853570, 'dev': 64516, 'nlink': 1, 'atime': 1771574315.2228405, 'mtime': 1771574312.7447734, 'ctime': 1771574313.0257812, 'gr_name': '', 'pw_name': '', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "417007d20895a54571330144b727b714177f3d13", "dest": "/home/tripleo-admin/ceph_client/ceph.client.manila.keyring", "gid": 0, "group": "root", "item": {"atime": 1771574315.2228405, "ctime": 1771574313.0257812, "dev": 64516, "gid": 167, "gr_name": "", "inode": 167853570, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771574312.7447734, "nlink": 1, "path": "/etc/ceph/ceph.client.manila.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "daad976ba5cb9a4c9a290642c8abec35", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 153, "src": "/etc/ceph/ceph.client.manila.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : Render global ceph.conf] ********************************** changed: [np0005625199.localdomain] => {"changed": true, "checksum": "ddbe7288c21bf3a53467024ece3d83cdd614ede0", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "md5sum": "e3d4aebd9e2ff146607bc253a0a5bd18", "mode": "0644", "owner": "root", "secontext": 
"unconfined_u:object_r:user_home_t:s0", "size": 142, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580449.8322406-63136-199582857846803/source", "state": "file", "uid": 0} TASK [ceph_migrate : MGR - Migrate RBD node] *********************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mgr.yaml for np0005625199.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MGR - Setup Mon/Mgr label to the target node] ************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005625199.localdomain TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005625199.localdomain] => (item=['np0005625202.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625202.localdomain", "mgr"], "delta": "0:00:00.739137", "end": "2026-02-20 09:40:52.572170", "item": ["np0005625202.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:51.833033", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005625202.localdomain", "stdout_lines": ["Added label mgr to host np0005625202.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625203.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625203.localdomain", "mgr"], "delta": "0:00:00.610047", "end": "2026-02-20 09:40:53.716097", "item": ["np0005625203.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:53.106050", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005625203.localdomain", "stdout_lines": ["Added label mgr to host np0005625203.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625204.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625204.localdomain", "mgr"], "delta": "0:00:00.667594", "end": "2026-02-20 09:40:54.992975", "item": 
["np0005625204.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:54.325381", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005625204.localdomain", "stdout_lines": ["Added label mgr to host np0005625204.localdomain"]} TASK [ceph_migrate : MGR - Load Spec from the orchestrator] ******************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mgr_spec": {"service_name": "mgr", "service_type": "mgr", "spec": {}}}, "changed": false} TASK [ceph_migrate : Update the MGR Daemon spec definition] ******************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mgr:/home/tripleo-admin/mgr:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mgr"], "delta": "0:00:00.749497", "end": "2026-02-20 09:40:56.472081", "rc": 0, "start": "2026-02-20 09:40:55.722584", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mgr update...", "stdout_lines": ["Scheduled mgr update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MGR - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mgr] ********************************************* changed: [np0005625199.localdomain] => 
{"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mgr", "-f", "json"], "delta": "0:00:00.651979", "end": "2026-02-20 09:40:57.938134", "msg": "", "rc": 0, "start": "2026-02-20 09:40:57.286155", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"466432c2dc65\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.49%\", \"created\": \"2026-02-20T07:37:02.737382Z\", \"daemon_id\": \"np0005625199.ileebh\", \"daemon_name\": \"mgr.np0005625199.ileebh\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-20T07:40:01.202747Z daemon:mgr.np0005625199.ileebh [INFO] \\\"Reconfigured mgr.np0005625199.ileebh on host 'np0005625199.localdomain'\\\"\"], \"hostname\": \"np0005625199.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-20T09:40:10.375957Z\", \"memory_usage\": 539597209, \"ports\": [9283, 8765], \"service_name\": \"mgr\", \"started\": \"2026-02-20T07:37:02.592749Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"d1fb9d82789f\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-20T07:39:16.741134Z\", \"daemon_id\": \"np0005625200.ypbkax\", \"daemon_name\": \"mgr.np0005625200.ypbkax\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-20T07:39:16.806167Z daemon:mgr.np0005625200.ypbkax [INFO] \\\"Deployed mgr.np0005625200.ypbkax on host 'np0005625200.localdomain'\\\"\"], \"hostname\": \"np0005625200.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:40:24.404916Z\", \"memory_usage\": 472278630, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-20T07:39:16.609322Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"c380dfcfcdce\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-20T07:39:10.282918Z\", \"daemon_id\": \"np0005625201.mtnyvu\", \"daemon_name\": \"mgr.np0005625201.mtnyvu\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-20T07:39:14.623207Z daemon:mgr.np0005625201.mtnyvu [INFO] \\\"Deployed mgr.np0005625201.mtnyvu on host 'np0005625201.localdomain'\\\"\", \"2026-02-20T07:40:05.460628Z daemon:mgr.np0005625201.mtnyvu [INFO] \\\"Reconfigured mgr.np0005625201.mtnyvu on host 'np0005625201.localdomain'\\\"\"], \"hostname\": \"np0005625201.localdomain\", \"is_active\": 
false, \"last_refresh\": \"2026-02-20T09:40:24.454966Z\", \"memory_usage\": 472593203, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-20T07:39:10.136296Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"466432c2dc65\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.49%\", \"created\": \"2026-02-20T07:37:02.737382Z\", \"daemon_id\": \"np0005625199.ileebh\", \"daemon_name\": \"mgr.np0005625199.ileebh\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-20T07:40:01.202747Z daemon:mgr.np0005625199.ileebh [INFO] \\\"Reconfigured mgr.np0005625199.ileebh on host 'np0005625199.localdomain'\\\"\"], \"hostname\": \"np0005625199.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-20T09:40:10.375957Z\", \"memory_usage\": 539597209, \"ports\": [9283, 8765], \"service_name\": \"mgr\", \"started\": \"2026-02-20T07:37:02.592749Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"d1fb9d82789f\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-20T07:39:16.741134Z\", 
\"daemon_id\": \"np0005625200.ypbkax\", \"daemon_name\": \"mgr.np0005625200.ypbkax\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-20T07:39:16.806167Z daemon:mgr.np0005625200.ypbkax [INFO] \\\"Deployed mgr.np0005625200.ypbkax on host 'np0005625200.localdomain'\\\"\"], \"hostname\": \"np0005625200.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:40:24.404916Z\", \"memory_usage\": 472278630, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-20T07:39:16.609322Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"c380dfcfcdce\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-20T07:39:10.282918Z\", \"daemon_id\": \"np0005625201.mtnyvu\", \"daemon_name\": \"mgr.np0005625201.mtnyvu\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-20T07:39:14.623207Z daemon:mgr.np0005625201.mtnyvu [INFO] \\\"Deployed mgr.np0005625201.mtnyvu on host 'np0005625201.localdomain'\\\"\", \"2026-02-20T07:40:05.460628Z daemon:mgr.np0005625201.mtnyvu [INFO] \\\"Reconfigured mgr.np0005625201.mtnyvu on host 'np0005625201.localdomain'\\\"\"], \"hostname\": \"np0005625201.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:40:24.454966Z\", \"memory_usage\": 472593203, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-20T07:39:10.136296Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Load Spec from the orchestrator] ******************** ok: 
[np0005625199.localdomain] => {"ansible_facts": {"mon_spec": {"service_name": "mon", "service_type": "mon", "spec": {}}}, "changed": false} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005625199.localdomain] => (item=['np0005625199.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625199.localdomain", "mon"], "delta": "0:00:00.750493", "end": "2026-02-20 09:40:59.580353", "item": ["np0005625199.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:40:58.829860", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005625199.localdomain", "stdout_lines": ["Added label mon to host np0005625199.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625199.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625199.localdomain", "_admin"], "delta": "0:00:00.672304", "end": "2026-02-20 09:41:00.809079", "item": ["np0005625199.localdomain", "_admin"], 
"msg": "", "rc": 0, "start": "2026-02-20 09:41:00.136775", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005625199.localdomain", "stdout_lines": ["Added label _admin to host np0005625199.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625200.localdomain", "mon"], "delta": "0:00:00.687677", "end": "2026-02-20 09:41:02.008469", "item": ["np0005625200.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:01.320792", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005625200.localdomain", "stdout_lines": ["Added label mon to host np0005625200.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625200.localdomain", "_admin"], "delta": "0:00:00.652117", "end": "2026-02-20 09:41:03.245294", "item": ["np0005625200.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:02.593177", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005625200.localdomain", "stdout_lines": ["Added label _admin to host np0005625200.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', 'mon']) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625201.localdomain", "mon"], "delta": "0:00:00.710999", "end": "2026-02-20 09:41:04.505043", "item": ["np0005625201.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:03.794044", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005625201.localdomain", "stdout_lines": ["Added label mon to host np0005625201.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625201.localdomain", "_admin"], "delta": "0:00:00.708994", "end": "2026-02-20 09:41:05.802934", "item": ["np0005625201.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:05.093940", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005625201.localdomain", "stdout_lines": ["Added label _admin to host np0005625201.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625202.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", 
"/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625202.localdomain", "mon"], "delta": "0:00:00.716202", "end": "2026-02-20 09:41:07.112689", "item": ["np0005625202.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:06.396487", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005625202.localdomain", "stdout_lines": ["Added label mon to host np0005625202.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625202.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625202.localdomain", "_admin"], "delta": "0:00:00.720611", "end": "2026-02-20 09:41:08.455754", "item": ["np0005625202.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:07.735143", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005625202.localdomain", "stdout_lines": ["Added label _admin to host np0005625202.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625203.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625203.localdomain", "mon"], "delta": "0:00:00.690344", "end": "2026-02-20 09:41:09.757596", "item": ["np0005625203.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:09.067252", "stderr": "", "stderr_lines": [], "stdout": 
"Added label mon to host np0005625203.localdomain", "stdout_lines": ["Added label mon to host np0005625203.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625203.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625203.localdomain", "_admin"], "delta": "0:00:00.731804", "end": "2026-02-20 09:41:11.095671", "item": ["np0005625203.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:10.363867", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005625203.localdomain", "stdout_lines": ["Added label _admin to host np0005625203.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625204.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625204.localdomain", "mon"], "delta": "0:00:00.643891", "end": "2026-02-20 09:41:12.367863", "item": ["np0005625204.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:11.723972", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005625204.localdomain", "stdout_lines": ["Added label mon to host np0005625204.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625204.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", 
"--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005625204.localdomain", "_admin"], "delta": "0:00:00.771813", "end": "2026-02-20 09:41:13.708187", "item": ["np0005625204.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:41:12.936374", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005625204.localdomain", "stdout_lines": ["Added label _admin to host np0005625204.localdomain"]} TASK [ceph_migrate : Normalize the mon spec to use labels] ********************* ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.822170", "end": "2026-02-20 09:41:15.225498", "rc": 0, "start": "2026-02-20 09:41:14.403328", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : RBD - wait new daemons to be available] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain => (item=np0005625202.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain => (item=np0005625203.localdomain) included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain => (item=np0005625204.localdomain) TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* FAILED - RETRYING: [np0005625199.localdomain]: wait for mon (200 retries left). FAILED - RETRYING: [np0005625199.localdomain]: wait for mon (199 retries left). changed: [np0005625199.localdomain] => {"attempts": 3, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625202", "-f", "json"], "delta": "0:00:06.737248", "end": "2026-02-20 09:41:38.345713", "msg": "", "rc": 0, "start": "2026-02-20 09:41:31.608465", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"29aac7b9c05c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.73%\", \"created\": \"2026-02-20T09:41:28.210650Z\", \"daemon_id\": \"np0005625202\", \"daemon_name\": \"mon.np0005625202\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:41:32.216478Z daemon:mon.np0005625202 [INFO] \\\"Deployed mon.np0005625202 on host 
'np0005625202.localdomain'\\\"\"], \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:35.132108Z\", \"memory_request\": 2147483648, \"memory_usage\": 43274731, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:28.093701Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"29aac7b9c05c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.73%\", \"created\": \"2026-02-20T09:41:28.210650Z\", \"daemon_id\": \"np0005625202\", \"daemon_name\": \"mon.np0005625202\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:41:32.216478Z daemon:mon.np0005625202 [INFO] \\\"Deployed mon.np0005625202 on host 'np0005625202.localdomain'\\\"\"], \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:35.132108Z\", \"memory_request\": 2147483648, \"memory_usage\": 43274731, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:28.093701Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]}

TASK [ceph_migrate : print daemon id option] ***********************************
skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : wait for mon] *********************************************
changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph",
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625203", "-f", "json"], "delta": "0:00:00.705565", "end": "2026-02-20 09:41:39.671077", "msg": "", "rc": 0, "start": "2026-02-20 09:41:38.965512", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"2e1f82df8ee4\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.05%\", \"created\": \"2026-02-20T09:41:22.759079Z\", \"daemon_id\": \"np0005625203\", \"daemon_name\": \"mon.np0005625203\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:41:25.554798Z daemon:mon.np0005625203 [INFO] \\\"Deployed mon.np0005625203 on host 'np0005625203.localdomain'\\\"\"], \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:35.130052Z\", \"memory_request\": 2147483648, \"memory_usage\": 44595937, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:22.644707Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"2e1f82df8ee4\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", 
\"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.05%\", \"created\": \"2026-02-20T09:41:22.759079Z\", \"daemon_id\": \"np0005625203\", \"daemon_name\": \"mon.np0005625203\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:41:25.554798Z daemon:mon.np0005625203 [INFO] \\\"Deployed mon.np0005625203 on host 'np0005625203.localdomain'\\\"\"], \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:35.130052Z\", \"memory_request\": 2147483648, \"memory_usage\": 44595937, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:22.644707Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625204", "-f", "json"], "delta": "0:00:00.732112", "end": "2026-02-20 09:41:41.161954", "msg": "", "rc": 0, "start": "2026-02-20 09:41:40.429842", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"4047f9576a63\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": 
\"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.24%\", \"created\": \"2026-02-20T09:41:20.076555Z\", \"daemon_id\": \"np0005625204\", \"daemon_name\": \"mon.np0005625204\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:41:20.155702Z daemon:mon.np0005625204 [INFO] \\\"Deployed mon.np0005625204 on host 'np0005625204.localdomain'\\\"\"], \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:35.113586Z\", \"memory_request\": 2147483648, \"memory_usage\": 47248834, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:19.985631Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"4047f9576a63\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.24%\", \"created\": \"2026-02-20T09:41:20.076555Z\", \"daemon_id\": \"np0005625204\", \"daemon_name\": \"mon.np0005625204\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:41:20.155702Z daemon:mon.np0005625204 [INFO] \\\"Deployed mon.np0005625204 on host 'np0005625204.localdomain'\\\"\"], \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:35.113586Z\", \"memory_request\": 2147483648, \"memory_usage\": 47248834, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:19.985631Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK 
[ceph_migrate : MON - Migrate RBD node] *********************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005625199.localdomain => (item=['np0005625199.localdomain', 'np0005625202.localdomain']) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005625199.localdomain => (item=['np0005625200.localdomain', 'np0005625203.localdomain']) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005625199.localdomain => (item=['np0005625201.localdomain', 'np0005625204.localdomain']) TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005625199.localdomain] => { "msg": "Migrate mon: np0005625199.localdomain to node: np0005625202.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005625199.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.103"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", 
"--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.702382", "end": "2026-02-20 09:41:42.912377", "msg": "", "rc": 0, "start": "2026-02-20 09:41:42.209995", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":24,\"quorum\":[0,1,2,3,4,5],\"quorum_names\":[\"np0005625199\",\"np0005625201\",\"np0005625200\",\"np0005625204\",\"np0005625203\",\"np0005625202\"],\"quorum_age\":5,\"monmap\":{\"epoch\":6,\"min_mon_release_name\":\"reef\",\"num_mons\":6},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":588795904,\"bytes_avail\":44483194880,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":75,\"modified\":\"2026-02-20T09:41:18.348752+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005625202.arwxwo\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005625203.lonygy\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 
0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005625204.exgrzx\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":24,\"quorum\":[0,1,2,3,4,5],\"quorum_names\":[\"np0005625199\",\"np0005625201\",\"np0005625200\",\"np0005625204\",\"np0005625203\",\"np0005625202\"],\"quorum_age\":5,\"monmap\":{\"epoch\":6,\"min_mon_release_name\":\"reef\",\"num_mons\":6},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":588795904,\"bytes_avail\":44483194880,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":75,\"modified\":\"2026-02-20T09:41:18.348752+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005625202.arwxwo\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005625203.lonygy\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005625204.exgrzx\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 
0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "cur_mon != client_node", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.732893", "end": "2026-02-20 09:41:44.328840", "msg": "", "rc": 0, "start": "2026-02-20 09:41:43.595947", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":14,\"available\":true,\"active_name\":\"np0005625199.ileebh\",\"num_standby\":5}", "stdout_lines": ["", "{\"epoch\":14,\"available\":true,\"active_name\":\"np0005625199.ileebh\",\"num_standby\":5}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005625199.ileebh", "available": true, "epoch": 14, "num_standby": 5}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005625199.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.934227", "end": "2026-02-20 09:41:46.217158", "msg": "", "rc": 0, "start": "2026-02-20 09:41:45.282931", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:41:46.365321", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:41:56.375027", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005625199.localdomain: jid=j815065541865.480486 changed: [np0005625199.localdomain] => {"ansible_job_id": "j815065541865.480486", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.738366", "end": "2026-02-20 09:41:58.016158", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j815065541865.480486", "start": "2026-02-20 09:41:57.277792", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005625199.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 
a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625199", "-f", "json"], "delta": "0:00:00.622687", "end": "2026-02-20 09:42:00.841949", "msg": "", "rc": 0, "start": "2026-02-20 09:42:00.219262", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"1909e087bf4d\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.76%\", \"created\": \"2026-02-20T07:36:53.348137Z\", \"daemon_id\": \"np0005625199\", \"daemon_name\": \"mon.np0005625199\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625199.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:48.180730Z\", \"memory_request\": 2147483648, \"memory_usage\": 147010355, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T07:37:00.996017Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"1909e087bf4d\", \"container_image_digests\": 
[\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.76%\", \"created\": \"2026-02-20T07:36:53.348137Z\", \"daemon_id\": \"np0005625199\", \"daemon_name\": \"mon.np0005625199\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625199.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:48.180730Z\", \"memory_request\": 2147483648, \"memory_usage\": 147010355, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T07:37:00.996017Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005625199", "--force"], "delta": "0:00:02.523933", "end": "2026-02-20 09:42:04.075989", "msg": "", "rc": 0, "start": "2026-02-20 09:42:01.552056", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005625199 from host 'np0005625199.localdomain'", "stdout_lines": ["Removed mon.np0005625199 from host 'np0005625199.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for 
np0005625199.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005625199.localdomain] => (item=['np0005625199.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625199.localdomain", "mon"], "delta": "0:00:00.749753", "end": "2026-02-20 09:42:05.671901", "item": ["np0005625199.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:42:04.922148", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005625199.localdomain", "stdout_lines": ["Removed label mon from host np0005625199.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625199.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625199.localdomain", "mgr"], "delta": "0:00:00.754647", "end": "2026-02-20 09:42:07.010185", "item": ["np0005625199.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-20 09:42:06.255538", "stderr": "", 
"stderr_lines": [], "stdout": "Removed label mgr from host np0005625199.localdomain", "stdout_lines": ["Removed label mgr from host np0005625199.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625199.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625199.localdomain", "_admin"], "delta": "0:00:00.773322", "end": "2026-02-20 09:42:08.404417", "item": ["np0005625199.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:42:07.631095", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005625199.localdomain", "stdout_lines": ["Removed label _admin from host np0005625199.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:42:08.544951", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:42:18.553912", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005625199.localdomain"], "delta": "0:00:00.780365", "end": "2026-02-20 09:42:19.924557", "msg": "", "rc": 0, "start": "2026-02-20 
09:42:19.144192", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005625199.localdomain'\ntype id \n-------------------- ---------------\nmgr np0005625199.ileebh\ncrash np0005625199 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005625199.localdomain'", "type id ", "-------------------- ---------------", "mgr np0005625199.ileebh", "crash np0005625199 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005625199.localdomain", "-f", "json"], "delta": "0:00:01.100450", "end": "2026-02-20 09:42:21.725932", "msg": "", "rc": 0, "start": "2026-02-20 09:42:20.625482", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005625199.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005625199.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005625199.localdomain", "--force"], "delta": "0:00:00.736410", "end": "2026-02-20 09:42:23.183468", "msg": "", "rc": 0, "start": "2026-02-20 09:42:22.447058", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005625199.localdomain'", "stdout_lines": ["Removed host 'np0005625199.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005625199.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005625199.localdomain] => {"backup": "/etc/os-net-config/tripleo_config.yaml.482971.2026-02-20@09:42:24~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.103/24", "dev", "vlan21"], "delta": "0:00:00.006544", "end": "2026-02-20 09:42:24.709219", "msg": "", "rc": 0, "start": "2026-02-20 09:42:24.702675", "stderr": 
"", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005625199.localdomain -> np0005625202.localdomain(192.168.122.106)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.290997.2026-02-20@09:42:25~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005625199.localdomain -> np0005625202.localdomain(192.168.122.106)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.103/24", "dev", "vlan21"], "delta": "0:00:00.005632", "end": "2026-02-20 09:42:26.771162", "msg": "", "rc": 0, "start": "2026-02-20 09:42:26.765530", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005625199.localdomain -> np0005625202.localdomain(192.168.122.106)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.103"], "delta": "0:00:02.094776", "end": "2026-02-20 09:42:29.591132", "msg": "", "rc": 0, "start": "2026-02-20 09:42:27.496356", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.103 (172.18.0.103) 56(84) bytes of data.\n64 bytes from 172.18.0.103: icmp_seq=1 ttl=64 time=0.059 ms\n64 bytes from 172.18.0.103: icmp_seq=2 ttl=64 time=0.065 ms\n64 bytes from 172.18.0.103: icmp_seq=3 ttl=64 time=0.044 ms\n\n--- 172.18.0.103 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2087ms\nrtt min/avg/max/mdev = 0.044/0.056/0.065/0.008 ms", "stdout_lines": ["PING 172.18.0.103 (172.18.0.103) 56(84) bytes of data.", "64 bytes from 172.18.0.103: icmp_seq=1 ttl=64 time=0.059 ms", "64 
bytes from 172.18.0.103: icmp_seq=2 ttl=64 time=0.065 ms", "64 bytes from 172.18.0.103: icmp_seq=3 ttl=64 time=0.044 ms", "", "--- 172.18.0.103 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2087ms", "rtt min/avg/max/mdev = 0.044/0.056/0.065/0.008 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] ******************************************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.736622", "end": "2026-02-20 09:42:31.018047", "rc": 0, "start": "2026-02-20 09:42:30.281425", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", 
"--daemon_type", "mon", "--daemon_id", "np0005625202", "-f", "json"], "delta": "0:00:00.694695", "end": "2026-02-20 09:42:32.412259", "msg": "", "rc": 0, "start": "2026-02-20 09:42:31.717564", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"29aac7b9c05c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.73%\", \"created\": \"2026-02-20T09:41:28.210650Z\", \"daemon_id\": \"np0005625202\", \"daemon_name\": \"mon.np0005625202\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:42:04.280450Z daemon:mon.np0005625202 [INFO] \\\"Reconfigured mon.np0005625202 on host 'np0005625202.localdomain'\\\"\"], \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:48.295704Z\", \"memory_request\": 2147483648, \"memory_usage\": 43148902, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:28.093701Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"29aac7b9c05c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.73%\", \"created\": \"2026-02-20T09:41:28.210650Z\", \"daemon_id\": 
\"np0005625202\", \"daemon_name\": \"mon.np0005625202\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:42:04.280450Z daemon:mon.np0005625202 [INFO] \\\"Reconfigured mon.np0005625202 on host 'np0005625202.localdomain'\\\"\"], \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:41:48.295704Z\", \"memory_request\": 2147483648, \"memory_usage\": 43148902, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:28.093701Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005625202", "--force"], "delta": "0:00:02.180841", "end": "2026-02-20 09:42:35.271462", "msg": "", "rc": 0, "start": "2026-02-20 09:42:33.090621", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005625202 from host 'np0005625202.localdomain'", "stdout_lines": ["Removed mon.np0005625202 from host 'np0005625202.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:42:35.421891", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:42:45.427884", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005625202.localdomain] *********** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on 
np0005625202.localdomain] *********** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005625202.localdomain:172.18.0.103"], "delta": "0:00:06.581952", "end": "2026-02-20 09:42:52.553463", "msg": "", "rc": 0, "start": "2026-02-20 09:42:45.971511", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005625202 on host 'np0005625202.localdomain'", "stdout_lines": ["Deployed mon.np0005625202 on host 'np0005625202.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:42:52.711835", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:43:02.722930", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.756930", "end": "2026-02-20 09:43:04.054692", "msg": "", "rc": 0, "start": "2026-02-20 09:43:03.297762", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":38,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005625201\",\"np0005625200\",\"np0005625204\",\"np0005625203\",\"np0005625202\"],\"quorum_age\":5,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":85,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109572084,\"bytes_used\":588849152,\"bytes_avail\":44483141632,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-20T09:42:54.094747+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":38,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005625201\",\"np0005625200\",\"np0005625204\",\"np0005625203\",\"np0005625202\"],\"quorum_age\":5,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":85,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109572084,\"bytes_used\":588849152,\"bytes_avail\":44483141632,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-20T09:42:54.094747+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:01.010111", "end": "2026-02-20 09:43:05.739531", "msg": "", "rc": 0, "start": "2026-02-20 09:43:04.729420", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005625202.localdomain'\nScheduled to reconfig osd.5 on host 
'np0005625202.localdomain'\nScheduled to reconfig osd.1 on host 'np0005625203.localdomain'\nScheduled to reconfig osd.4 on host 'np0005625203.localdomain'\nScheduled to reconfig osd.0 on host 'np0005625204.localdomain'\nScheduled to reconfig osd.3 on host 'np0005625204.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005625202.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005625202.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005625203.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005625203.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005625204.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005625204.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005625199.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", 
"/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.742146", "end": "2026-02-20 09:43:07.299736", "msg": "", "rc": 0, "start": "2026-02-20 09:43:06.557590", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:43:07.437748", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:43:17.443536", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC POLL on np0005625199.localdomain: jid=j373838324895.484573 started=1 finished=0 ASYNC OK on np0005625199.localdomain: jid=j373838324895.484573
changed: [np0005625199.localdomain] => {"ansible_job_id": "j373838324895.484573", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:22.969166", "end": "2026-02-20 09:43:41.254400", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j373838324895.484573", "start": "2026-02-20 09:43:18.285234", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph",
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.748166", "end": "2026-02-20 09:43:44.391718", "rc": 0, "start": "2026-02-20 09:43:43.643552", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625202", "-f", "json"], "delta": "0:00:00.681692", "end": "2026-02-20 09:43:45.752895", "msg": "", "rc": 0, "start": "2026-02-20 09:43:45.071203", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"bd908c902c42\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": 
\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.89%\", \"created\": \"2026-02-20T09:42:52.158877Z\", \"daemon_id\": \"np0005625202\", \"daemon_name\": \"mon.np0005625202\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:43:43.280846Z\", \"memory_request\": 2147483648, \"memory_usage\": 39436943, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:42:52.080933Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"bd908c902c42\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.89%\", \"created\": \"2026-02-20T09:42:52.158877Z\", \"daemon_id\": \"np0005625202\", \"daemon_name\": \"mon.np0005625202\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625202.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:43:43.280846Z\", \"memory_request\": 2147483648, \"memory_usage\": 39436943, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:42:52.080933Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm 
--net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005625199.localdomain] => { "msg": "Migrate mon: np0005625200.localdomain to node: np0005625203.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005625199.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.104"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.918579", "end": "2026-02-20 09:43:47.590913", "msg": "", "rc": 0, "start": "2026-02-20 09:43:46.672334", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":38,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005625201\",\"np0005625200\",\"np0005625204\",\"np0005625203\",\"np0005625202\"],\"quorum_age\":48,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":87,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":611680256,\"bytes_avail\":44460310528,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-20T09:42:54.094747+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":38,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005625201\",\"np0005625200\",\"np0005625204\",\"np0005625203\",\"np0005625202\"],\"quorum_age\":48,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":87,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":611680256,\"bytes_avail\":44460310528,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-20T09:42:54.094747+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005625199.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => 
{"changed": false, "examined": 2, "files": [{"atime": 1771580625.275024, "ctime": 1771580625.783036, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1216349539, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771580625.5150297, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 367, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771580626.6600573, "ctime": 1771580627.1750698, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1216349540, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771580626.9220636, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 2, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 367, 'inode': 1216349539, 'dev': 64516, 'nlink': 1, 'atime': 1771580625.275024, 'mtime': 1771580625.5150297, 'ctime': 1771580625.783036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "6c98c760b51975c4ea3bc248381077bfc8cd495c", "dest": 
"/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1771580625.275024, "ctime": 1771580625.783036, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1216349539, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771580625.5150297, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 367, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "585e2532132bc2d44544d19c50717e97", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 367, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 1216349540, 'dev': 64516, 'nlink': 1, 'atime': 1771580626.6600573, 'mtime': 1771580626.9220636, 'ctime': 1771580627.1750698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "5d0ac48a08c9259d9227b20d376d9a209a188f22", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1771580626.6600573, "ctime": 1771580627.1750698, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1216349540, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771580626.9220636, "nlink": 1, "path": 
"/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "5fa07c3c2dd6bc5b069351fa5b7406dc", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.677216", "end": "2026-02-20 09:43:52.531098", "msg": "", "rc": 0, "start": "2026-02-20 09:43:51.853882", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":25,\"available\":true,\"active_name\":\"np0005625200.ypbkax\",\"num_standby\":4}", "stdout_lines": ["", "{\"epoch\":25,\"available\":true,\"active_name\":\"np0005625200.ypbkax\",\"num_standby\":4}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005625200.ypbkax", "available": true, "epoch": 25, "num_standby": 4}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005625199.localdomain TASK [ceph_migrate : Refresh ceph_cli] 
***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.956958", "end": "2026-02-20 09:43:54.413844", "msg": "", "rc": 0, "start": "2026-02-20 09:43:53.456886", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:43:54.564098", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:44:04.575682", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005625199.localdomain: jid=j955925714641.486909 changed: [np0005625199.localdomain] => {"ansible_job_id": "j955925714641.486909", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", 
"--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.682922", "end": "2026-02-20 09:44:06.004306", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j955925714641.486909", "start": "2026-02-20 09:44:05.321384", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005625199.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625200", "-f", "json"], "delta": "0:00:00.651221", "end": "2026-02-20 09:44:08.751256", "msg": "", "rc": 0, "start": "2026-02-20 09:44:08.100035", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"21c2fe7e78a3\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.15%\", \"created\": \"2026-02-20T07:39:05.291827Z\", \"daemon_id\": \"np0005625200\", \"daemon_name\": \"mon.np0005625200\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625200.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:43:56.269803Z\", \"memory_request\": 2147483648, \"memory_usage\": 141662617, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T07:39:05.153910Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"21c2fe7e78a3\", \"container_image_digests\": 
[\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.15%\", \"created\": \"2026-02-20T07:39:05.291827Z\", \"daemon_id\": \"np0005625200\", \"daemon_name\": \"mon.np0005625200\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625200.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:43:56.269803Z\", \"memory_request\": 2147483648, \"memory_usage\": 141662617, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T07:39:05.153910Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005625200", "--force"], "delta": "0:00:04.013486", "end": "2026-02-20 09:44:13.636332", "msg": "", "rc": 0, "start": "2026-02-20 09:44:09.622846", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005625200 from host 'np0005625200.localdomain'", "stdout_lines": ["Removed mon.np0005625200 from host 'np0005625200.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005625199.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625200.localdomain", "mon"], "delta": "0:00:00.747498", "end": "2026-02-20 09:44:15.307662", "item": ["np0005625200.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:44:14.560164", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005625200.localdomain", "stdout_lines": ["Removed label mon from host np0005625200.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625200.localdomain", "mgr"], "delta": "0:00:00.730487", "end": "2026-02-20 09:44:16.638542", 
"item": ["np0005625200.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-20 09:44:15.908055", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005625200.localdomain", "stdout_lines": ["Removed label mgr from host np0005625200.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625200.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625200.localdomain", "_admin"], "delta": "0:00:00.750627", "end": "2026-02-20 09:44:18.030755", "item": ["np0005625200.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:44:17.280128", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005625200.localdomain", "stdout_lines": ["Removed label _admin from host np0005625200.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:44:18.188199", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:44:28.201721", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", 
"orch", "host", "drain", "np0005625200.localdomain"], "delta": "0:00:00.746595", "end": "2026-02-20 09:44:29.760683", "msg": "", "rc": 0, "start": "2026-02-20 09:44:29.014088", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005625200.localdomain'\ntype id \n-------------------- ---------------\ncrash np0005625200 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005625200.localdomain'", "type id ", "-------------------- ---------------", "crash np0005625200 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005625200.localdomain", "-f", "json"], "delta": "0:00:00.701744", "end": "2026-02-20 09:44:31.132479", "msg": "", "rc": 0, "start": "2026-02-20 09:44:30.430735", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.104\", \"hostname\": \"np0005625200.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.104\", \"hostname\": \"np0005625200.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", 
"--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005625200.localdomain", "--force"], "delta": "0:00:00.721004", "end": "2026-02-20 09:44:32.500918", "msg": "", "rc": 0, "start": "2026-02-20 09:44:31.779914", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005625200.localdomain'", "stdout_lines": ["Removed host 'np0005625200.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005625199.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.472703.2026-02-20@09:44:33~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005625199.localdomain -> np0005625200.localdomain(192.168.122.104)] => {"changed": 
true, "cmd": ["ip", "a", "del", "172.18.0.104/24", "dev", "vlan21"], "delta": "0:00:00.007467", "end": "2026-02-20 09:44:34.324805", "msg": "", "rc": 0, "start": "2026-02-20 09:44:34.317338", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005625199.localdomain -> np0005625203.localdomain(192.168.122.107)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.294416.2026-02-20@09:44:35~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005625199.localdomain -> np0005625203.localdomain(192.168.122.107)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.104/24", "dev", "vlan21"], "delta": "0:00:00.004954", "end": "2026-02-20 09:44:36.609657", "msg": "", "rc": 0, "start": "2026-02-20 09:44:36.604703", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005625199.localdomain -> np0005625203.localdomain(192.168.122.107)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.104"], "delta": "0:00:02.087967", "end": "2026-02-20 09:44:39.506448", "msg": "", "rc": 0, "start": "2026-02-20 09:44:37.418481", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.104 (172.18.0.104) 56(84) bytes of data.\n64 bytes from 172.18.0.104: icmp_seq=1 ttl=64 time=0.075 ms\n64 bytes from 172.18.0.104: icmp_seq=2 ttl=64 time=0.085 ms\n64 bytes from 172.18.0.104: icmp_seq=3 ttl=64 time=0.957 ms\n\n--- 172.18.0.104 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 
2080ms\nrtt min/avg/max/mdev = 0.075/0.372/0.957/0.413 ms", "stdout_lines": ["PING 172.18.0.104 (172.18.0.104) 56(84) bytes of data.", "64 bytes from 172.18.0.104: icmp_seq=1 ttl=64 time=0.075 ms", "64 bytes from 172.18.0.104: icmp_seq=2 ttl=64 time=0.085 ms", "64 bytes from 172.18.0.104: icmp_seq=3 ttl=64 time=0.957 ms", "", "--- 172.18.0.104 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2080ms", "rtt min/avg/max/mdev = 0.075/0.372/0.957/0.413 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] ******************************************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.687903", "end": "2026-02-20 09:44:40.928330", "rc": 0, "start": "2026-02-20 09:44:40.240427", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", 
"--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625203", "-f", "json"], "delta": "0:00:00.623782", "end": "2026-02-20 09:44:42.142896", "msg": "", "rc": 0, "start": "2026-02-20 09:44:41.519114", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"2e1f82df8ee4\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.68%\", \"created\": \"2026-02-20T09:41:22.759079Z\", \"daemon_id\": \"np0005625203\", \"daemon_name\": \"mon.np0005625203\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:44:15.261735Z daemon:mon.np0005625203 [INFO] \\\"Reconfigured mon.np0005625203 on host 'np0005625203.localdomain'\\\"\"], \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:43:56.516566Z\", \"memory_request\": 2147483648, \"memory_usage\": 47615836, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:22.644707Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"2e1f82df8ee4\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": 
\"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.68%\", \"created\": \"2026-02-20T09:41:22.759079Z\", \"daemon_id\": \"np0005625203\", \"daemon_name\": \"mon.np0005625203\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:44:15.261735Z daemon:mon.np0005625203 [INFO] \\\"Reconfigured mon.np0005625203 on host 'np0005625203.localdomain'\\\"\"], \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:43:56.516566Z\", \"memory_request\": 2147483648, \"memory_usage\": 47615836, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:22.644707Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005625203", "--force"], "delta": "0:00:02.192642", "end": "2026-02-20 09:44:44.988524", "msg": "", "rc": 0, "start": "2026-02-20 09:44:42.795882", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005625203 from host 'np0005625203.localdomain'", "stdout_lines": ["Removed mon.np0005625203 from host 'np0005625203.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:44:45.146055", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:44:55.160690", 
"user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005625203.localdomain] *********** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on np0005625203.localdomain] *********** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005625203.localdomain:172.18.0.104"], "delta": "0:00:03.245760", "end": "2026-02-20 09:44:59.049582", "msg": "", "rc": 0, "start": "2026-02-20 09:44:55.803822", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005625203 on host 'np0005625203.localdomain'", "stdout_lines": ["Deployed mon.np0005625203 on host 'np0005625203.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:44:59.192230", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:45:09.205337", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.764673", "end": "2026-02-20 09:45:10.501644", "msg": "", "rc": 0, "start": "2026-02-20 09:45:09.736971", "stderr": "", 
"stderr_lines": [], "stdout": "\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":46,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625201\",\"np0005625204\",\"np0005625202\"],\"quorum_age\":21,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":88,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109574332,\"bytes_used\":611749888,\"bytes_avail\":44460240896,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":83,\"modified\":\"2026-02-20T09:45:02.161735+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":46,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625201\",\"np0005625204\",\"np0005625202\"],\"quorum_age\":21,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":88,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109574332,\"bytes_used\":611749888,\"bytes_avail\":44460240896,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":83,\"modified\":\"2026-02-20T09:45:02.161735+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:03.900253", "end": "2026-02-20 09:45:15.086685", "msg": "", "rc": 0, "start": "2026-02-20 09:45:11.186432", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005625202.localdomain'\nScheduled to reconfig osd.5 on host 'np0005625202.localdomain'\nScheduled to reconfig osd.1 on host 
'np0005625203.localdomain'\nScheduled to reconfig osd.4 on host 'np0005625203.localdomain'\nScheduled to reconfig osd.0 on host 'np0005625204.localdomain'\nScheduled to reconfig osd.3 on host 'np0005625204.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005625202.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005625202.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005625203.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005625203.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005625204.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005625204.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005625199.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", 
"mgr", "fail"], "delta": "0:00:00.790875", "end": "2026-02-20 09:45:16.751708", "msg": "", "rc": 0, "start": "2026-02-20 09:45:15.960833", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:45:16.901902", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:45:26.915864", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005625199.localdomain: jid=j746080705601.489785 changed: [np0005625199.localdomain] => {"ansible_job_id": "j746080705601.489785", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.616104", "end": "2026-02-20 09:45:28.311408", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j746080705601.489785", "start": "2026-02-20 09:45:27.695304", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: 
[np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.733802", "end": "2026-02-20 09:45:31.069166", "rc": 0, "start": "2026-02-20 09:45:30.335364", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625203", "-f", "json"], "delta": "0:00:00.683268", "end": 
"2026-02-20 09:45:32.470046", "msg": "", "rc": 0, "start": "2026-02-20 09:45:31.786778", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"1ca6fa81c11f\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.63%\", \"created\": \"2026-02-20T09:44:58.814756Z\", \"daemon_id\": \"np0005625203\", \"daemon_name\": \"mon.np0005625203\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:45:18.840226Z\", \"memory_request\": 2147483648, \"memory_usage\": 58174996, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:44:58.717512Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"1ca6fa81c11f\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.63%\", \"created\": \"2026-02-20T09:44:58.814756Z\", \"daemon_id\": \"np0005625203\", \"daemon_name\": \"mon.np0005625203\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625203.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:45:18.840226Z\", \"memory_request\": 2147483648, \"memory_usage\": 58174996, \"ports\": 
[], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:44:58.717512Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005625199.localdomain] => { "msg": "Migrate mon: np0005625201.localdomain to node: np0005625204.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005625199.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.105"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.730851", "end": "2026-02-20 09:45:34.211023", "msg": "", "rc": 0, "start": "2026-02-20 09:45:33.480172", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":46,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625201\",\"np0005625204\",\"np0005625202\"],\"quorum_age\":45,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":616030208,\"bytes_avail\":44455960576,\"bytes_total\":45071990784,\"read_bytes_sec\":17569,\"write_bytes_sec\":0,\"read_op_per_sec\":7,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":83,\"modified\":\"2026-02-20T09:45:02.161735+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":46,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625201\",\"np0005625204\",\"np0005625202\"],\"quorum_age\":45,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":616030208,\"bytes_avail\":44455960576,\"bytes_total\":45071990784,\"read_bytes_sec\":17569,\"write_bytes_sec\":0,\"read_op_per_sec\":7,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":83,\"modified\":\"2026-02-20T09:45:02.161735+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005625199.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: 
[np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => {"changed": false, "examined": 2, "files": [{"atime": 1771580720.9930778, "ctime": 1771580721.436089, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1149240612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771580721.2150834, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771580722.3071115, "ctime": 1771580722.8181245, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1149311618, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771580722.5691183, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 2, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 271, 'inode': 1149240612, 'dev': 64516, 'nlink': 1, 'atime': 1771580720.9930778, 'mtime': 1771580721.2150834, 'ctime': 1771580721.436089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", 
"changed": true, "checksum": "dee5c4b17e9c442a71a88dfb23993e184840fac5", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1771580720.9930778, "ctime": 1771580721.436089, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1149240612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771580721.2150834, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "06e58f246310b9e5a7f7e423a1207c8a", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 271, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 1149311618, 'dev': 64516, 'nlink': 1, 'atime': 1771580722.3071115, 'mtime': 1771580722.5691183, 'ctime': 1771580722.8181245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "5d0ac48a08c9259d9227b20d376d9a209a188f22", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1771580722.3071115, "ctime": 1771580722.8181245, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1149311618, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": 
false, "mode": "0600", "mtime": 1771580722.5691183, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "5fa07c3c2dd6bc5b069351fa5b7406dc", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.714244", "end": "2026-02-20 09:45:39.339547", "msg": "", "rc": 0, "start": "2026-02-20 09:45:38.625303", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":36,\"available\":true,\"active_name\":\"np0005625203.lonygy\",\"num_standby\":3}", "stdout_lines": ["", "{\"epoch\":36,\"available\":true,\"active_name\":\"np0005625203.lonygy\",\"num_standby\":3}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005625203.lonygy", "available": true, "epoch": 36, "num_standby": 3}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "mgr.active_name | regex_search(cur_mon | 
split('.') | first) or mgr.active_name | regex_search(target_node | split('.') | first)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005625199.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625201", "-f", "json"], "delta": "0:00:04.486049", "end": "2026-02-20 09:45:44.811012", "msg": "", "rc": 0, "start": "2026-02-20 09:45:40.324963", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"3ee891b808cb\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.18%\", \"created\": \"2026-02-20T07:39:02.895698Z\", \"daemon_id\": \"np0005625201\", \"daemon_name\": \"mon.np0005625201\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:45:33.911498Z daemon:mon.np0005625201 [INFO] \\\"Reconfigured mon.np0005625201 on host 'np0005625201.localdomain'\\\"\"], \"hostname\": \"np0005625201.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:45:18.778414Z\", \"memory_request\": 2147483648, \"memory_usage\": 151204659, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T07:39:02.764371Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"3ee891b808cb\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.18%\", \"created\": \"2026-02-20T07:39:02.895698Z\", \"daemon_id\": \"np0005625201\", \"daemon_name\": \"mon.np0005625201\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:45:33.911498Z daemon:mon.np0005625201 [INFO] \\\"Reconfigured mon.np0005625201 on host 'np0005625201.localdomain'\\\"\"], \"hostname\": \"np0005625201.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:45:18.778414Z\", \"memory_request\": 2147483648, \"memory_usage\": 151204659, \"ports\": [], 
\"service_name\": \"mon\", \"started\": \"2026-02-20T07:39:02.764371Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005625201", "--force"], "delta": "0:00:02.648639", "end": "2026-02-20 09:45:48.351575", "msg": "", "rc": 0, "start": "2026-02-20 09:45:45.702936", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005625201 from host 'np0005625201.localdomain'", "stdout_lines": ["Removed mon.np0005625201 from host 'np0005625201.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005625199.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625201.localdomain", "mon"], "delta": "0:00:05.266164", "end": "2026-02-20 09:45:54.471755", "item": ["np0005625201.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-20 09:45:49.205591", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005625201.localdomain", "stdout_lines": ["Removed label mon from host np0005625201.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625201.localdomain", "mgr"], "delta": "0:00:00.664234", "end": "2026-02-20 09:45:55.784415", "item": ["np0005625201.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-20 09:45:55.120181", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005625201.localdomain", "stdout_lines": ["Removed label mgr from host np0005625201.localdomain"]} changed: [np0005625199.localdomain] => (item=['np0005625201.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005625201.localdomain", "_admin"], "delta": "0:00:00.730109", "end": 
"2026-02-20 09:45:57.129187", "item": ["np0005625201.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-20 09:45:56.399078", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005625201.localdomain", "stdout_lines": ["Removed label _admin from host np0005625201.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:45:57.265092", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:46:07.278360", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005625201.localdomain"], "delta": "0:00:00.915385", "end": "2026-02-20 09:46:09.141633", "msg": "", "rc": 0, "start": "2026-02-20 09:46:08.226248", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005625201.localdomain'\ntype id \n-------------------- ---------------\ncrash np0005625201 \nmon np0005625201 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005625201.localdomain'", "type id ", "-------------------- ---------------", "crash np0005625201 ", "mon np0005625201 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK 
[ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005625201.localdomain", "-f", "json"], "delta": "0:00:00.710939", "end": "2026-02-20 09:46:10.922130", "msg": "", "rc": 0, "start": "2026-02-20 09:46:10.211191", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.105\", \"hostname\": \"np0005625201.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.105\", \"hostname\": \"np0005625201.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005625201.localdomain", "--force"], "delta": "0:00:00.639085", "end": "2026-02-20 09:46:12.156819", "msg": "", "rc": 0, "start": "2026-02-20 09:46:11.517734", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005625201.localdomain'", "stdout_lines": ["Removed host 'np0005625201.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not 
defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005625199.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.473675.2026-02-20@09:46:13~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005625199.localdomain -> np0005625201.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.105/24", "dev", "vlan21"], "delta": "0:00:00.007136", "end": "2026-02-20 09:46:14.625667", "msg": "", "rc": 0, "start": "2026-02-20 09:46:14.618531", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005625199.localdomain -> np0005625204.localdomain(192.168.122.108)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.299849.2026-02-20@09:46:16~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result 
was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005625199.localdomain -> np0005625204.localdomain(192.168.122.108)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.105/24", "dev", "vlan21"], "delta": "0:00:00.005651", "end": "2026-02-20 09:46:17.415885", "msg": "", "rc": 0, "start": "2026-02-20 09:46:17.410234", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005625199.localdomain -> np0005625204.localdomain(192.168.122.108)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.105"], "delta": "0:00:02.062638", "end": "2026-02-20 09:46:20.254425", "msg": "", "rc": 0, "start": "2026-02-20 09:46:18.191787", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.105 (172.18.0.105) 56(84) bytes of data.\n64 bytes from 172.18.0.105: icmp_seq=1 ttl=64 time=0.089 ms\n64 bytes from 172.18.0.105: icmp_seq=2 ttl=64 time=0.065 ms\n64 bytes from 172.18.0.105: icmp_seq=3 ttl=64 time=0.082 ms\n\n--- 172.18.0.105 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2054ms\nrtt min/avg/max/mdev = 0.065/0.078/0.089/0.010 ms", "stdout_lines": ["PING 172.18.0.105 (172.18.0.105) 56(84) bytes of data.", "64 bytes from 172.18.0.105: icmp_seq=1 ttl=64 time=0.089 ms", "64 bytes from 172.18.0.105: icmp_seq=2 ttl=64 time=0.065 ms", "64 bytes from 172.18.0.105: icmp_seq=3 ttl=64 time=0.082 ms", "", "--- 172.18.0.105 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2054ms", "rtt min/avg/max/mdev = 0.065/0.078/0.089/0.010 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] 
******************************************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.766249", "end": "2026-02-20 09:46:21.770159", "rc": 0, "start": "2026-02-20 09:46:21.003910", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625204", "-f", "json"], "delta": "0:00:00.669085", "end": "2026-02-20 09:46:23.161554", "msg": "", "rc": 0, "start": "2026-02-20 09:46:22.492469", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"4047f9576a63\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], 
\"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.00%\", \"created\": \"2026-02-20T09:41:20.076555Z\", \"daemon_id\": \"np0005625204\", \"daemon_name\": \"mon.np0005625204\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:45:32.487353Z daemon:mon.np0005625204 [INFO] \\\"Reconfigured mon.np0005625204 on host 'np0005625204.localdomain'\\\"\"], \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:45:19.096444Z\", \"memory_request\": 2147483648, \"memory_usage\": 59412316, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:19.985631Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"4047f9576a63\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.00%\", \"created\": \"2026-02-20T09:41:20.076555Z\", \"daemon_id\": \"np0005625204\", \"daemon_name\": \"mon.np0005625204\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-20T09:45:32.487353Z daemon:mon.np0005625204 [INFO] \\\"Reconfigured mon.np0005625204 on host 'np0005625204.localdomain'\\\"\"], \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:45:19.096444Z\", \"memory_request\": 2147483648, \"memory_usage\": 59412316, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:41:19.985631Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": 
\"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005625204", "--force"], "delta": "0:00:02.232902", "end": "2026-02-20 09:46:26.067784", "msg": "", "rc": 0, "start": "2026-02-20 09:46:23.834882", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005625204 from host 'np0005625204.localdomain'", "stdout_lines": ["Removed mon.np0005625204 from host 'np0005625204.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:46:26.250468", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:46:36.263175", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005625204.localdomain] *********** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on np0005625204.localdomain] *********** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005625204.localdomain:172.18.0.105"], "delta": "0:00:03.144831", "end": "2026-02-20 09:46:40.001046", "msg": "", "rc": 0, "start": 
"2026-02-20 09:46:36.856215", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005625204 on host 'np0005625204.localdomain'", "stdout_lines": ["Deployed mon.np0005625204 on host 'np0005625204.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:46:40.151261", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:46:50.164966", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:01.394612", "end": "2026-02-20 09:46:52.121208", "msg": "", "rc": 0, "start": "2026-02-20 09:46:50.726596", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625202\",\"np0005625203\",\"np0005625204\"],\"quorum_age\":0,\"monmap\":{\"epoch\":17,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575738,\"bytes_used\":616022016,\"bytes_avail\":44455968768,\"bytes_total\":45071990784,\"write_bytes_sec\":255,\"read_op_per_sec\":0,\"write_op_per_sec\":0},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":88,\"modified\":\"2026-02-20T09:46:46.676614+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625202\",\"np0005625203\",\"np0005625204\"],\"quorum_age\":0,\"monmap\":{\"epoch\":17,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575738,\"bytes_used\":616022016,\"bytes_avail\":44455968768,\"bytes_total\":45071990784,\"write_bytes_sec\":255,\"read_op_per_sec\":0,\"write_op_per_sec\":0},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":88,\"modified\":\"2026-02-20T09:46:46.676614+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:01.025915", "end": "2026-02-20 09:46:53.776220", "msg": "", "rc": 0, "start": "2026-02-20 09:46:52.750305", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005625202.localdomain'\nScheduled to reconfig osd.5 on 
host 'np0005625202.localdomain'\nScheduled to reconfig osd.1 on host 'np0005625203.localdomain'\nScheduled to reconfig osd.4 on host 'np0005625203.localdomain'\nScheduled to reconfig osd.0 on host 'np0005625204.localdomain'\nScheduled to reconfig osd.3 on host 'np0005625204.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005625202.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005625202.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005625203.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005625203.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005625204.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005625204.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005625199.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005625199.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", 
"/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:03.878557", "end": "2026-02-20 09:46:58.454219", "msg": "", "rc": 0, "start": "2026-02-20 09:46:54.575662", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-20 09:46:58.609208", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-20 09:47:08.621900", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC POLL on np0005625199.localdomain: jid=j946828472378.493311 started=1 finished=0 ASYNC POLL on np0005625199.localdomain: jid=j946828472378.493311 started=1 finished=0 ASYNC OK on np0005625199.localdomain: jid=j946828472378.493311 changed: [np0005625199.localdomain] => {"ansible_job_id": "j946828472378.493311", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:03.847130", "end": "2026-02-20 09:47:13.345128", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j946828472378.493311", "start": "2026-02-20 09:47:09.497998", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005625199.localdomain] => 
{"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005625199.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.800969", "end": "2026-02-20 09:47:15.951811", "rc": 0, "start": "2026-02-20 09:47:15.150842", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005625199.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005625199.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", 
"a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005625204", "-f", "json"], "delta": "0:00:00.702174", "end": "2026-02-20 09:47:17.408239", "msg": "", "rc": 0, "start": "2026-02-20 09:47:16.706065", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"b8341c3094c9\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.71%\", \"created\": \"2026-02-20T09:46:39.754806Z\", \"daemon_id\": \"np0005625204\", \"daemon_name\": \"mon.np0005625204\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:47:00.746869Z\", \"memory_request\": 2147483648, \"memory_usage\": 44029706, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:46:39.654410Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"b8341c3094c9\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.71%\", \"created\": \"2026-02-20T09:46:39.754806Z\", \"daemon_id\": \"np0005625204\", \"daemon_name\": 
\"mon.np0005625204\", \"daemon_type\": \"mon\", \"hostname\": \"np0005625204.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-20T09:47:00.746869Z\", \"memory_request\": 2147483648, \"memory_usage\": 44029706, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-20T09:46:39.654410Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Sleep before moving to the next mon] ********************** Pausing for 30 seconds ok: [np0005625199.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2026-02-20 09:47:17.615341", "stderr": "", "stdout": "Paused for 30.03 seconds", "stop": "2026-02-20 09:47:47.648999", "user_input": ""} TASK [ceph_migrate : POST - Dump logs] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_load.yaml for np0005625199.localdomain TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005625199.localdomain] => {"changed": false, "examined": 4, "files": [{"atime": 1771580502.4716907, "ctime": 1771580502.3216867, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 343981884, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771580450.443359, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 142, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771580502.4766908, "ctime": 1771580502.3216867, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 251722529, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771573058.1703396, 
"nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771574314.1088104, "ctime": 1771580502.3216867, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 251722530, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771574311.8547492, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.openstack.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771574315.2228405, "ctime": 1771580502.3216867, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 251722531, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771574312.7447734, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.manila.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 4, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Restore files] ******************************************** changed: [np0005625199.localdomain] => (item=ceph.conf) => {"ansible_loop_var": "item", "changed": true, "checksum": "ddbe7288c21bf3a53467024ece3d83cdd614ede0", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "item": "ceph.conf", "md5sum": "e3d4aebd9e2ff146607bc253a0a5bd18", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 142, "src": "/home/tripleo-admin/ceph_client/ceph.conf", "state": "file", "uid": 
0} changed: [np0005625199.localdomain] => (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": true, "checksum": "5d0ac48a08c9259d9227b20d376d9a209a188f22", "dest": "/etc/ceph/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": "ceph.client.admin.keyring", "md5sum": "5fa07c3c2dd6bc5b069351fa5b7406dc", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 151, "src": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005625199.localdomain] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client/logs", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Get Ceph Health] ****************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "-s", "-f", "json"], "delta": "0:00:04.581283", "end": "2026-02-20 09:47:55.336762", "msg": "", "rc": 0, "start": "2026-02-20 09:47:50.755479", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625202\",\"np0005625203\",\"np0005625204\"],\"quorum_age\":63,\"monmap\":{\"epoch\":17,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":90,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109576580,\"bytes_used\":616099840,\"bytes_avail\":44455890944,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":88,\"modified\":\"2026-02-20T09:46:46.676614+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by 
cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005625202\",\"np0005625203\",\"np0005625204\"],\"quorum_age\":63,\"monmap\":{\"epoch\":17,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":90,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771573196,\"num_in_osds\":6,\"osd_in_since\":1771573176,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109576580,\"bytes_used\":616099840,\"bytes_avail\":44455890944,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005625203.zsrwgk\",\"status\":\"up:active\",\"gid\":26854}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":88,\"modified\":\"2026-02-20T09:46:46.676614+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Load ceph data] ******************************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"ceph": {"election_epoch": 68, "fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "fsmap": {"by_rank": [{"filesystem_id": 1, "gid": 26854, "name": "mds.np0005625203.zsrwgk", "rank": 0, "status": "up:active"}], "epoch": 17, "id": 1, "in": 1, "max": 1, "up": 1, "up:standby": 2}, "health": {"checks": {"CEPHADM_STRAY_DAEMON": {"muted": false, "severity": "HEALTH_WARN", "summary": {"count": 1, "message": "1 stray daemon(s) not managed by cephadm"}}, "CEPHADM_STRAY_HOST": {"muted": false, "severity": "HEALTH_WARN", "summary": {"count": 1, "message": "1 stray host(s) with 1 daemon(s) not managed by cephadm"}}}, "mutes": [], "status": "HEALTH_WARN"}, "mgrmap": {"available": true, "modules": ["cephadm", "iostat", "nfs", "restful"], "num_standbys": 2, "services": {}}, 
"monmap": {"epoch": 17, "min_mon_release_name": "reef", "num_mons": 3}, "osdmap": {"epoch": 90, "num_in_osds": 6, "num_osds": 6, "num_remapped_pgs": 0, "num_up_osds": 6, "osd_in_since": 1771573176, "osd_up_since": 1771573196}, "pgmap": {"bytes_avail": 44455890944, "bytes_total": 45071990784, "bytes_used": 616099840, "data_bytes": 109576580, "num_objects": 69, "num_pgs": 177, "num_pools": 7, "pgs_by_state": [{"count": 177, "state_name": "active+clean"}]}, "progress_events": {}, "quorum": [0, 1, 2], "quorum_age": 63, "quorum_names": ["np0005625202", "np0005625203", "np0005625204"], "servicemap": {"epoch": 88, "modified": "2026-02-20T09:46:46.676614+0000", "services": {}}}}, "changed": false} TASK [ceph_migrate : Dump ceph -s output to log file] ************************** changed: [np0005625199.localdomain] => {"changed": true, "checksum": "12c9e1a5079402dc5ff739f96fe4938afd2515d3", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_health.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "878c073edad9de25b257896f548cf215", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1436, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580875.601193-64713-273995098298106/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph Orch ServiceMap] ********************************* changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "ls", "-f", "json"], "delta": "0:00:04.505443", "end": "2026-02-20 09:48:01.572854", "msg": "", "rc": 0, "start": "2026-02-20 09:47:57.067411", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid 
a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2026-02-20T07:37:30.228656Z\", \"last_refresh\": \"2026-02-20T09:47:00.575455Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mds\"}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-20T09:39:27.146998Z\", \"last_refresh\": \"2026-02-20T09:47:00.575989Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mgr\"}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-20T09:40:56.319844Z\", \"last_refresh\": \"2026-02-20T09:47:00.576142Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T09:47:18.511548Z service:mon [INFO] \\\"service was created\\\"\"], \"placement\": {\"label\": \"mon\"}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-20T09:47:15.782791Z\", \"last_refresh\": \"2026-02-20T09:47:00.576271Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-20T07:37:44.380465Z\", \"running\": 0, \"size\": 0}}, {\"placement\": {\"hosts\": [\"np0005625202.localdomain\", \"np0005625203.localdomain\", \"np0005625204.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": 
\"2026-02-20T07:38:11.428636Z\", \"last_refresh\": \"2026-02-20T09:47:00.575629Z\", \"running\": 6, \"size\": 6}}]", "stdout_lines": ["", "[{\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2026-02-20T07:37:30.228656Z\", \"last_refresh\": \"2026-02-20T09:47:00.575455Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mds\"}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-20T09:39:27.146998Z\", \"last_refresh\": \"2026-02-20T09:47:00.575989Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mgr\"}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-20T09:40:56.319844Z\", \"last_refresh\": \"2026-02-20T09:47:00.576142Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-20T09:47:18.511548Z service:mon [INFO] \\\"service was created\\\"\"], \"placement\": {\"label\": \"mon\"}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-20T09:47:15.782791Z\", \"last_refresh\": \"2026-02-20T09:47:00.576271Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-20T07:37:44.380465Z\", \"running\": 0, \"size\": 0}}, {\"placement\": {\"hosts\": [\"np0005625202.localdomain\", \"np0005625203.localdomain\", \"np0005625204.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-20T07:38:11.428636Z\", \"last_refresh\": \"2026-02-20T09:47:00.575629Z\", \"running\": 6, \"size\": 6}}]"]} TASK [ceph_migrate : Load Service Map] 
***************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"servicemap": [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-20T07:37:30.228656Z", "last_refresh": "2026-02-20T09:47:00.575455Z", "running": 3, "size": 3}}, {"placement": {"label": "mds"}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-20T09:39:27.146998Z", "last_refresh": "2026-02-20T09:47:00.575989Z", "running": 3, "size": 3}}, {"placement": {"label": "mgr"}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-20T09:40:56.319844Z", "last_refresh": "2026-02-20T09:47:00.576142Z", "running": 3, "size": 3}}, {"events": ["2026-02-20T09:47:18.511548Z service:mon [INFO] \"service was created\""], "placement": {"label": "mon"}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-20T09:47:15.782791Z", "last_refresh": "2026-02-20T09:47:00.576271Z", "running": 3, "size": 3}}, {"placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-20T07:37:44.380465Z", "running": 0, "size": 0}}, {"placement": {"hosts": ["np0005625202.localdomain", "np0005625203.localdomain", "np0005625204.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-20T07:38:11.428636Z", "last_refresh": "2026-02-20T09:47:00.575629Z", "running": 6, "size": 6}}]}, "changed": false} TASK [ceph_migrate : Print Service Map] **************************************** skipping: [np0005625199.localdomain] => (item={'placement': {'host_pattern': '*'}, 'service_name': 'crash', 'service_type': 'crash', 'status': {'created': '2026-02-20T07:37:30.228656Z', 
'last_refresh': '2026-02-20T09:47:00.575455Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-20T07:37:30.228656Z", "last_refresh": "2026-02-20T09:47:00.575455Z", "running": 3, "size": 3}}} skipping: [np0005625199.localdomain] => (item={'placement': {'label': 'mds'}, 'service_id': 'mds', 'service_name': 'mds.mds', 'service_type': 'mds', 'status': {'created': '2026-02-20T09:39:27.146998Z', 'last_refresh': '2026-02-20T09:47:00.575989Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"label": "mds"}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-20T09:39:27.146998Z", "last_refresh": "2026-02-20T09:47:00.575989Z", "running": 3, "size": 3}}} skipping: [np0005625199.localdomain] => (item={'placement': {'label': 'mgr'}, 'service_name': 'mgr', 'service_type': 'mgr', 'status': {'created': '2026-02-20T09:40:56.319844Z', 'last_refresh': '2026-02-20T09:47:00.576142Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"label": "mgr"}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-20T09:40:56.319844Z", "last_refresh": "2026-02-20T09:47:00.576142Z", "running": 3, "size": 3}}} skipping: [np0005625199.localdomain] => (item={'events': ['2026-02-20T09:47:18.511548Z service:mon [INFO] "service was created"'], 'placement': {'label': 'mon'}, 'service_name': 'mon', 'service_type': 'mon', 'status': {'created': '2026-02-20T09:47:15.782791Z', 'last_refresh': '2026-02-20T09:47:00.576271Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-20T09:47:18.511548Z service:mon [INFO] \"service was 
created\""], "placement": {"label": "mon"}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-20T09:47:15.782791Z", "last_refresh": "2026-02-20T09:47:00.576271Z", "running": 3, "size": 3}}} skipping: [np0005625199.localdomain] => (item={'placement': {'host_pattern': '*'}, 'service_name': 'node-proxy', 'service_type': 'node-proxy', 'status': {'created': '2026-02-20T07:37:44.380465Z', 'running': 0, 'size': 0}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-20T07:37:44.380465Z", "running": 0, "size": 0}}} skipping: [np0005625199.localdomain] => (item={'placement': {'hosts': ['np0005625202.localdomain', 'np0005625203.localdomain', 'np0005625204.localdomain']}, 'service_id': 'default_drive_group', 'service_name': 'osd.default_drive_group', 'service_type': 'osd', 'spec': {'data_devices': {'paths': ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1']}, 'filter_logic': 'AND', 'objectstore': 'bluestore'}, 'status': {'created': '2026-02-20T07:38:11.428636Z', 'last_refresh': '2026-02-20T09:47:00.575629Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"hosts": ["np0005625202.localdomain", "np0005625203.localdomain", "np0005625204.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-20T07:38:11.428636Z", "last_refresh": "2026-02-20T09:47:00.575629Z", "running": 6, "size": 6}}} skipping: [np0005625199.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch ls output to log file] ********************* changed: [np0005625199.localdomain] => {"changed": 
true, "checksum": "85d2a775aa9fade112c07dbdd73c81b25c67484c", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_orch_ls.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "ccd59b43a52e81544800e02349ef7650", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1600, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580881.9314249-64744-271573126080341/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph config] ****************************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "config", "dump", "-f", "json"], "delta": "0:00:04.268863", "end": "2026-02-20 09:48:07.674773", "msg": "", "rc": 0, "start": "2026-02-20 09:48:03.405910", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_haproxy\",\"value\":\"registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_keepalived\",\"value\":\"registry.redhat.io/rhceph/keepalived-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section
\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625202\",\"location_type\":\"host\",\"location_value\":\"np0005625202\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625203\",\"location_type\":\"host\",\"location_value\":\"np0005625203\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625204\",\"location_type\":\"host\",\"location_value\":\"np0005625204\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.np0005625203.zsrwgk\",\"name\":\"mds_join_fs\",\"value\":\"cephfs\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]", "stdout_lines": ["", 
"[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_haproxy\",\"value\":\"registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_keepalived\",\"value\":\"registry.redhat.io/rhceph/keepalived-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\"
:\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625202\",\"location_type\":\"host\",\"location_value\":\"np0005625202\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625203\",\"location_type\":\"host\",\"location_value\":\"np0005625203\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005625204\",\"location_type\":\"host\",\"location_value\":\"np0005625204\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.np0005625203.zsrwgk\",\"name\":\"mds_join_fs\",\"value\":\"cephfs\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]"]} TASK [ceph_migrate : Print Ceph config dump] *********************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph config dump output to log file] ***************** changed: [np0005625199.localdomain] => {"changed": true, "checksum": "270dc655052e0f57dfff5953c3339abcf2f30e7e", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_config_dump.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "dbf8e523ca84f195e6dd7ae7f8001766", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 3044, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580887.9212697-64773-14577118781893/source", "state": "file", 
"uid": 1003} TASK [ceph_migrate : Get Ceph Orch Host Map] *********************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "host", "ls", "-f", "json"], "delta": "0:00:04.790983", "end": "2026-02-20 09:48:14.298059", "msg": "", "rc": 0, "start": "2026-02-20 09:48:09.507076", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"addr\": \"192.168.122.106\", \"hostname\": \"np0005625202.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005625203.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005625204.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.106\", \"hostname\": \"np0005625202.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005625203.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005625204.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}]"]} TASK 
[ceph_migrate : Load nodes] *********************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"nds": [{"addr": "192.168.122.106", "hostname": "np0005625202.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}, {"addr": "192.168.122.107", "hostname": "np0005625203.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}, {"addr": "192.168.122.108", "hostname": "np0005625204.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}]}, "changed": false} TASK [ceph_migrate : Load hostmap List] **************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"hostmap": {"np0005625202.localdomain": ["osd", "mds", "mgr", "mon", "_admin"], "np0005625203.localdomain": ["osd", "mds", "mgr", "mon", "_admin"], "np0005625204.localdomain": ["osd", "mds", "mgr", "mon", "_admin"]}}, "changed": false} TASK [ceph_migrate : Print Host Map] ******************************************* skipping: [np0005625199.localdomain] => (item=np0005625202.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625202.localdomain"} skipping: [np0005625199.localdomain] => (item=np0005625203.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625203.localdomain"} skipping: [np0005625199.localdomain] => (item=np0005625204.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005625204.localdomain"} skipping: [np0005625199.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch host ls output to log file] **************** changed: [np0005625199.localdomain] => {"changed": true, "checksum": "6146561b9e26b9274883c7f0b4b57a7606baf21a", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_orch_host_ls.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "3a651361a8eb929781af899e26e9eb29", "mode": "0644", 
"owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 84, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580894.6875477-64804-55526477406700/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph monmap and load data] **************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "mon", "dump", "-f", "json"], "delta": "0:00:04.538477", "end": "2026-02-20 09:48:20.702779", "msg": "", "rc": 0, "start": "2026-02-20 09:48:16.164302", "stderr": "Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\ndumped monmap epoch 17", "stderr_lines": ["Inferring fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "dumped monmap epoch 17"], "stdout": "\n{\"epoch\":17,\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"modified\":\"2026-02-20T09:46:46.606881Z\",\"created\":\"2026-02-20T07:36:51.191305Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: 
\":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005625202\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005625203\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005625204\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}", "stdout_lines": ["", "{\"epoch\":17,\"fsid\":\"a8557ee9-b55d-5519-942c-cf8f6172f1d8\",\"modified\":\"2026-02-20T09:46:46.606881Z\",\"created\":\"2026-02-20T07:36:51.191305Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: 
\":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005625202\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005625203\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005625204\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}"]} TASK [ceph_migrate : Get Monmap] *********************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"mon_dump": {"created": "2026-02-20T07:36:51.191305Z", "disallowed_leaders: ": "", "election_strategy": 1, "epoch": 17, "features": {"optional": [], "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging", "quincy", "reef"]}, "fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "min_mon_release": 18, "min_mon_release_name": "reef", "modified": "2026-02-20T09:46:46.606881Z", "mons": [{"addr": "172.18.0.103:6789/0", "crush_location": "{}", "name": "np0005625202", "priority": 0, "public_addr": "172.18.0.103:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.103:3300", "nonce": 0, "type": "v2"}, {"addr": 
"172.18.0.103:6789", "nonce": 0, "type": "v1"}]}, "rank": 0, "weight": 0}, {"addr": "172.18.0.104:6789/0", "crush_location": "{}", "name": "np0005625203", "priority": 0, "public_addr": "172.18.0.104:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.104:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.104:6789", "nonce": 0, "type": "v1"}]}, "rank": 1, "weight": 0}, {"addr": "172.18.0.105:6789/0", "crush_location": "{}", "name": "np0005625204", "priority": 0, "public_addr": "172.18.0.105:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.105:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.105:6789", "nonce": 0, "type": "v1"}]}, "rank": 2, "weight": 0}], "quorum": [0, 1, 2], "removed_ranks: ": "", "stretch_mode": false, "tiebreaker_mon": ""}}, "changed": false} TASK [ceph_migrate : Print monmap] ********************************************* skipping: [np0005625199.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph mon dump output to log file] ******************** changed: [np0005625199.localdomain] => {"changed": true, "checksum": "bfbcc7c59aec719f1fcdf7eba372b546338938f6", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_mon_dump.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "8bbd3129cea68be636d42086a65d86b1", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1425, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580901.0158749-64833-142486226364831/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Load nodes to decommission] ******************************* ok: [np0005625199.localdomain] => {"ansible_facts": {"decomm_nodes": ["np0005625202.localdomain", "np0005625203.localdomain", "np0005625204.localdomain"]}, "changed": false} TASK [ceph_migrate : Load target nodes] **************************************** ok: [np0005625199.localdomain] => {"ansible_facts": {"target_nodes": ["np0005625202.localdomain", 
"np0005625203.localdomain", "np0005625204.localdomain"]}, "changed": false} TASK [ceph_migrate : Print target nodes] *************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Print decomm_nodes] *************************************** skipping: [np0005625199.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Configure Swift to use rgw backend] *********************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Flush handlers to ensure mgr restart completes] *********** RUNNING HANDLER [ceph_migrate : restart mgr] *********************************** changed: [np0005625199.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "a8557ee9-b55d-5519-942c-cf8f6172f1d8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.668556", "end": "2026-02-20 09:48:23.422499", "msg": "", "rc": 0, "start": "2026-02-20 09:48:22.753943", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Install cephadm on all compute nodes] ********************* skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "groups['ComputeHCI'] is defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Force fail ceph mgr on first compute node] **************** skipping: [np0005625199.localdomain] => {"changed": false, "false_condition": "groups['ComputeHCI'] is defined", "skip_reason": "Conditional result was False"} PLAY RECAP ********************************************************************* np0005625199.localdomain : 
ok=239   changed=111  unreachable=0    failed=0    skipped=143  rescued=0    ignored=0
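The "Load nodes" and "Load hostmap List" tasks above turn the JSON from `cephadm shell -- ceph orch host ls -f json` into a hostname-to-labels map, which later tasks use to pick the decommission and target nodes. A minimal Python sketch of that transform, using the exact host data from this run (the variable names `hostmap` and the label filter mirror the facts in the log; the standalone script itself is illustrative, not part of the role):

```python
import json

# Output of `cephadm shell -- ceph orch host ls -f json`, as captured in the
# "Get Ceph Orch Host Map" task above.
orch_host_ls = """
[{"addr": "192.168.122.106", "hostname": "np0005625202.localdomain",
  "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""},
 {"addr": "192.168.122.107", "hostname": "np0005625203.localdomain",
  "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""},
 {"addr": "192.168.122.108", "hostname": "np0005625204.localdomain",
  "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}]
"""

nodes = json.loads(orch_host_ls)

# Equivalent of the "Load hostmap List" set_fact: hostname -> list of labels.
hostmap = {n["hostname"]: n["labels"] for n in nodes}

# In this run every host carries the "mon" label, so the decommission and
# target node lists come out identical, matching the log.
mon_nodes = sorted(h for h, labels in hostmap.items() if "mon" in labels)

print(hostmap["np0005625202.localdomain"])
print(mon_nodes)
```

Because all three hosts share the same label set here, `decomm_nodes` and `target_nodes` in the log are the same three hostnames; on a real migration the label filter is what would separate them.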