[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (edpm_node_hostname). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (edpm_node_ip). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (source_galera_members). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (source_mariadb_ip). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (enable_tlse). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (neutron_qe_test). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (tobiko_qe_test). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (neutron_qe_dir). Using last defined value only.
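The warnings above come from Ansible's YAML loader: when a mapping defines the same key twice, the last definition silently wins. A minimal Python sketch (illustrative only, not part of the job) showing the same last-key-wins behavior with a plain dict literal, which resolves duplicates the same way the YAML loader does here:

```python
# A dict literal with a duplicate key keeps only the last definition,
# mirroring "found a duplicate dict key ... Using last defined value only."
vars_yaml_equivalent = {
    "edpm_node_hostname": "first-definition",
    "edpm_node_hostname": "last-definition",  # this value survives
}

print(vars_yaml_equivalent["edpm_node_hostname"])
```

The practical fix on the playbook side is to remove the earlier definitions from vars.yaml so the intent is explicit.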
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (prelaunch_barbican_secret). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (os_cloud_name). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (standalone_ip). Using last defined value only.
Using /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/ansible.cfg as config file

PLAY [Externalize Ceph] ********************************************************

TASK [Gathering Facts] *********************************************************
ok: [np0005626459.localdomain]

TASK [ceph_migrate : Check file in the src directory] **************************
[WARNING]: Skipped '/home/tripleo-admin/ceph_client' path due to this access issue: '/home/tripleo-admin/ceph_client' is not a directory
ok: [np0005626459.localdomain] => {"changed": false, "examined": 0, "files": [], "matched": 0, "msg": "Not all paths examined, check warnings for details", "skipped_paths": {"/home/tripleo-admin/ceph_client": "'/home/tripleo-admin/ceph_client' is not a directory"}}

TASK [ceph_migrate : Restore files] ********************************************
skipping: [np0005626459.localdomain] => (item=ceph.conf) => {"ansible_loop_var": "item", "changed": false, "false_condition": "dir_ceph_files.files | length > 0", "item": "ceph.conf", "skip_reason": "Conditional result was False"}
skipping: [np0005626459.localdomain] => (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": false, "false_condition": "dir_ceph_files.files | length > 0", "item": 
"ceph.client.admin.keyring", "skip_reason": "Conditional result was False"}
skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"}

TASK [ceph_migrate : Ensure backup directory exists] ***************************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Get Ceph Health] ******************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "-s", "-f", "json"], "delta": "0:00:02.605954", "end": "2026-02-23 09:42:08.033822", "msg": "", "rc": 0, "start": "2026-02-23 09:42:05.427868", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":14,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626459\",\"np0005626461\",\"np0005626460\"],\"quorum_age\":7416,\"monmap\":{\"epoch\":3,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":77,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":592781312,\"bytes_avail\":44479209472,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":7,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626461.jyrtpf\",\"status\":\"up:active\",\"gid\":24259}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":65,\"modified\":\"2026-02-23T09:40:13.036004+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":14,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626459\",\"np0005626461\",\"np0005626460\"],\"quorum_age\":7416,\"monmap\":{\"epoch\":3,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":77,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":592781312,\"bytes_avail\":44479209472,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":7,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626461.jyrtpf\",\"status\":\"up:active\",\"gid\":24259}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":65,\"modified\":\"2026-02-23T09:40:13.036004+0000\",\"services\":{}},\"progress_events\":{}}"]}

TASK [ceph_migrate : Load ceph data] *******************************************
ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph": {"election_epoch": 14, "fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46", "fsmap": {"by_rank": [{"filesystem_id": 1, "gid": 24259, "name": "mds.np0005626461.jyrtpf", "rank": 0, "status": "up:active"}], "epoch": 7, "id": 1, "in": 1, "max": 1, "up": 1, "up:standby": 2}, "health": {"checks": {}, "mutes": [], "status": "HEALTH_OK"}, "mgrmap": {"available": true, "modules": ["cephadm", "iostat", "nfs", "restful"], "num_standbys": 2, "services": {}}, "monmap": {"epoch": 3, "min_mon_release_name": "reef", "num_mons": 3}, "osdmap": {"epoch": 77, "num_in_osds": 6, "num_osds": 6, "num_remapped_pgs": 0, "num_up_osds": 6, "osd_in_since": 1771832330, "osd_up_since": 1771832350}, "pgmap": {"bytes_avail": 
44479209472, "bytes_total": 45071990784, "bytes_used": 592781312, "data_bytes": 109571242, "num_objects": 69, "num_pgs": 177, "num_pools": 7, "pgs_by_state": [{"count": 177, "state_name": "active+clean"}]}, "progress_events": {}, "quorum": [0, 1, 2], "quorum_age": 7416, "quorum_names": ["np0005626459", "np0005626461", "np0005626460"], "servicemap": {"epoch": 65, "modified": "2026-02-23T09:40:13.036004+0000", "services": {}}}}, "changed": false}

TASK [ceph_migrate : Dump ceph -s output to log file] **************************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Get Ceph Orch ServiceMap] *********************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "ls", "-f", "json"], "delta": "0:00:02.582791", "end": "2026-02-23 09:42:11.661155", "msg": "", "rc": 0, "start": "2026-02-23 09:42:09.078364", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"events\": [\"2026-02-23T07:38:44.085163Z service:crash [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": 
{\"created\": \"2026-02-23T07:36:37.670053Z\", \"last_refresh\": \"2026-02-23T09:31:38.413087Z\", \"running\": 6, \"size\": 6}}, {\"events\": [\"2026-02-23T07:57:50.531649Z service:mds.mds [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005626459.localdomain\", \"np0005626460.localdomain\", \"np0005626461.localdomain\"]}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-23T07:57:40.790182Z\", \"last_refresh\": \"2026-02-23T09:31:38.413203Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T07:38:33.788052Z service:mgr [INFO] \\\"service was created\\\"\", \"2026-02-23T07:37:35.855438Z service:mgr [ERROR] \\\"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005626459.localdomain\", \"np0005626460.localdomain\", \"np0005626461.localdomain\"]}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-23T07:37:19.241739Z\", \"last_refresh\": \"2026-02-23T09:31:38.412961Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T07:38:25.074108Z service:mon [INFO] \\\"service was created\\\"\", \"2026-02-23T07:37:35.850568Z service:mon [ERROR] \\\"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005626459.localdomain\", \"np0005626460.localdomain\", \"np0005626461.localdomain\"]}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-23T07:37:19.232030Z\", \"last_refresh\": \"2026-02-23T09:31:38.412780Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T07:36:51.516905Z service:node-proxy [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-23T07:36:51.489181Z\", \"running\": 0, \"size\": 0}}, {\"events\": 
[\"2026-02-23T07:37:19.256893Z service:osd.default_drive_group [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005626463.localdomain\", \"np0005626465.localdomain\", \"np0005626466.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-23T07:37:19.249273Z\", \"last_refresh\": \"2026-02-23T09:34:54.151977Z\", \"running\": 6, \"size\": 6}}]", "stdout_lines": ["", "[{\"events\": [\"2026-02-23T07:38:44.085163Z service:crash [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2026-02-23T07:36:37.670053Z\", \"last_refresh\": \"2026-02-23T09:31:38.413087Z\", \"running\": 6, \"size\": 6}}, {\"events\": [\"2026-02-23T07:57:50.531649Z service:mds.mds [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005626459.localdomain\", \"np0005626460.localdomain\", \"np0005626461.localdomain\"]}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-23T07:57:40.790182Z\", \"last_refresh\": \"2026-02-23T09:31:38.413203Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T07:38:33.788052Z service:mgr [INFO] \\\"service was created\\\"\", \"2026-02-23T07:37:35.855438Z service:mgr [ERROR] \\\"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005626459.localdomain\", \"np0005626460.localdomain\", \"np0005626461.localdomain\"]}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-23T07:37:19.241739Z\", \"last_refresh\": \"2026-02-23T09:31:38.412961Z\", \"running\": 3, \"size\": 3}}, 
{\"events\": [\"2026-02-23T07:38:25.074108Z service:mon [INFO] \\\"service was created\\\"\", \"2026-02-23T07:37:35.850568Z service:mon [ERROR] \\\"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005626459.localdomain\", \"np0005626460.localdomain\", \"np0005626461.localdomain\"]}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-23T07:37:19.232030Z\", \"last_refresh\": \"2026-02-23T09:31:38.412780Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T07:36:51.516905Z service:node-proxy [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-23T07:36:51.489181Z\", \"running\": 0, \"size\": 0}}, {\"events\": [\"2026-02-23T07:37:19.256893Z service:osd.default_drive_group [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005626463.localdomain\", \"np0005626465.localdomain\", \"np0005626466.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-23T07:37:19.249273Z\", \"last_refresh\": \"2026-02-23T09:34:54.151977Z\", \"running\": 6, \"size\": 6}}]"]}

TASK [ceph_migrate : Load Service Map] *****************************************
ok: [np0005626459.localdomain] => {"ansible_facts": {"servicemap": [{"events": ["2026-02-23T07:38:44.085163Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-23T07:36:37.670053Z", "last_refresh": "2026-02-23T09:31:38.413087Z", "running": 6, "size": 6}}, {"events": ["2026-02-23T07:57:50.531649Z 
service:mds.mds [INFO] \"service was created\""], "placement": {"hosts": ["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-23T07:57:40.790182Z", "last_refresh": "2026-02-23T09:31:38.413203Z", "running": 3, "size": 3}}, {"events": ["2026-02-23T07:38:33.788052Z service:mgr [INFO] \"service was created\"", "2026-02-23T07:37:35.855438Z service:mgr [ERROR] \"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-23T07:37:19.241739Z", "last_refresh": "2026-02-23T09:31:38.412961Z", "running": 3, "size": 3}}, {"events": ["2026-02-23T07:38:25.074108Z service:mon [INFO] \"service was created\"", "2026-02-23T07:37:35.850568Z service:mon [ERROR] \"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-23T07:37:19.232030Z", "last_refresh": "2026-02-23T09:31:38.412780Z", "running": 3, "size": 3}}, {"events": ["2026-02-23T07:36:51.516905Z service:node-proxy [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-23T07:36:51.489181Z", "running": 0, "size": 0}}, {"events": ["2026-02-23T07:37:19.256893Z service:osd.default_drive_group [INFO] \"service was created\""], "placement": {"hosts": ["np0005626463.localdomain", "np0005626465.localdomain", "np0005626466.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": 
["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-23T07:37:19.249273Z", "last_refresh": "2026-02-23T09:34:54.151977Z", "running": 6, "size": 6}}]}, "changed": false}

TASK [ceph_migrate : Print Service Map] ****************************************
skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T07:38:44.085163Z service:crash [INFO] "service was created"'], 'placement': {'host_pattern': '*'}, 'service_name': 'crash', 'service_type': 'crash', 'status': {'created': '2026-02-23T07:36:37.670053Z', 'last_refresh': '2026-02-23T09:31:38.413087Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T07:38:44.085163Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-23T07:36:37.670053Z", "last_refresh": "2026-02-23T09:31:38.413087Z", "running": 6, "size": 6}}}
skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T07:57:50.531649Z service:mds.mds [INFO] "service was created"'], 'placement': {'hosts': ['np0005626459.localdomain', 'np0005626460.localdomain', 'np0005626461.localdomain']}, 'service_id': 'mds', 'service_name': 'mds.mds', 'service_type': 'mds', 'status': {'created': '2026-02-23T07:57:40.790182Z', 'last_refresh': '2026-02-23T09:31:38.413203Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T07:57:50.531649Z service:mds.mds [INFO] \"service was created\""], "placement": {"hosts": ["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-23T07:57:40.790182Z", "last_refresh": "2026-02-23T09:31:38.413203Z", "running": 3, "size": 3}}} 
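The Load Service Map task above parses the `ceph orch ls -f json` output into the `servicemap` fact. A minimal Python sketch (an illustration, not code from the role) of the kind of check that map enables, using a trimmed sample of the service entries captured in this log:

```python
import json

# Trimmed sample of the `ceph orch ls -f json` output shown above;
# only the fields the check below uses are kept.
orch_ls = json.loads("""
[
  {"service_name": "crash", "service_type": "crash",
   "status": {"running": 6, "size": 6}},
  {"service_name": "mds.mds", "service_type": "mds",
   "status": {"running": 3, "size": 3}},
  {"service_name": "mon", "service_type": "mon",
   "status": {"running": 3, "size": 3}},
  {"service_name": "node-proxy", "service_type": "node-proxy",
   "status": {"running": 0, "size": 0}}
]
""")

# Flag any service whose running daemon count is below its placement size.
degraded = [s["service_name"] for s in orch_ls
            if s["status"]["running"] < s["status"]["size"]]

print(degraded)  # empty: every service runs as many daemons as placed
```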
skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T07:38:33.788052Z service:mgr [INFO] "service was created"', '2026-02-23T07:37:35.855438Z service:mgr [ERROR] "Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts"'], 'placement': {'hosts': ['np0005626459.localdomain', 'np0005626460.localdomain', 'np0005626461.localdomain']}, 'service_name': 'mgr', 'service_type': 'mgr', 'status': {'created': '2026-02-23T07:37:19.241739Z', 'last_refresh': '2026-02-23T09:31:38.412961Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T07:38:33.788052Z service:mgr [INFO] \"service was created\"", "2026-02-23T07:37:35.855438Z service:mgr [ERROR] \"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-23T07:37:19.241739Z", "last_refresh": "2026-02-23T09:31:38.412961Z", "running": 3, "size": 3}}}
skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T07:38:25.074108Z service:mon [INFO] "service was created"', '2026-02-23T07:37:35.850568Z service:mon [ERROR] "Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts"'], 'placement': {'hosts': ['np0005626459.localdomain', 'np0005626460.localdomain', 'np0005626461.localdomain']}, 'service_name': 'mon', 'service_type': 'mon', 'status': {'created': '2026-02-23T07:37:19.232030Z', 'last_refresh': '2026-02-23T09:31:38.412780Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T07:38:25.074108Z service:mon [INFO] \"service was created\"", "2026-02-23T07:37:35.850568Z service:mon [ERROR] \"Failed to apply: Cannot place on np0005626461.localdomain: Unknown hosts\""], "placement": {"hosts": 
["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-23T07:37:19.232030Z", "last_refresh": "2026-02-23T09:31:38.412780Z", "running": 3, "size": 3}}}
skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T07:36:51.516905Z service:node-proxy [INFO] "service was created"'], 'placement': {'host_pattern': '*'}, 'service_name': 'node-proxy', 'service_type': 'node-proxy', 'status': {'created': '2026-02-23T07:36:51.489181Z', 'running': 0, 'size': 0}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T07:36:51.516905Z service:node-proxy [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-23T07:36:51.489181Z", "running": 0, "size": 0}}}
skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T07:37:19.256893Z service:osd.default_drive_group [INFO] "service was created"'], 'placement': {'hosts': ['np0005626463.localdomain', 'np0005626465.localdomain', 'np0005626466.localdomain']}, 'service_id': 'default_drive_group', 'service_name': 'osd.default_drive_group', 'service_type': 'osd', 'spec': {'data_devices': {'paths': ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1']}, 'filter_logic': 'AND', 'objectstore': 'bluestore'}, 'status': {'created': '2026-02-23T07:37:19.249273Z', 'last_refresh': '2026-02-23T09:34:54.151977Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T07:37:19.256893Z service:osd.default_drive_group [INFO] \"service was created\""], "placement": {"hosts": ["np0005626463.localdomain", "np0005626465.localdomain", "np0005626466.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": 
{"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-23T07:37:19.249273Z", "last_refresh": "2026-02-23T09:34:54.151977Z", "running": 6, "size": 6}}}
skipping: [np0005626459.localdomain] => {"msg": "All items skipped"}

TASK [ceph_migrate : Dump ceph orch ls output to log file] *********************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Get Ceph config] ******************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "config", "dump", "-f", "json"], "delta": "0:00:02.435862", "end": "2026-02-23 09:42:14.745981", "msg": "", "rc": 0, "start": "2026-02-23 09:42:12.310119", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626463\",\"location_type\":\"host\",\"location_value\":\"np0005626463\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"
value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626465\",\"location_type\":\"host\",\"location_value\":\"np0005626465\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709082009\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626466\",\"location_type\":\"host\",\"location_value\":\"np0005626466\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]", "stdout_lines": ["", "[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/c
ephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626463\",\"location_type\":\"host\",\"location_value\":\"np0005626463\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626465\",\"location_type\":\"host\",\"location_value\":\"np0005626465\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709082009\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626466\",\"location_type\":\"host\",\"location_value\":\"np0005626466\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]"]}

TASK [ceph_migrate : Print Ceph config dump] ***********************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : Dump ceph config dump output to log file] *****************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Get Ceph Orch Host Map] ***********************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "host", "ls", "-f", "json"], "delta": 
"0:00:02.760235", "end": "2026-02-23 09:42:18.087086", "msg": "", "rc": 0, "start": "2026-02-23 09:42:15.326851", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005626459.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.104\", \"hostname\": \"np0005626460.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.105\", \"hostname\": \"np0005626461.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.106\", \"hostname\": \"np0005626463.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005626465.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005626466.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005626459.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.104\", \"hostname\": \"np0005626460.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.105\", 
\"hostname\": \"np0005626461.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.106\", \"hostname\": \"np0005626463.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005626465.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005626466.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}]"]} TASK [ceph_migrate : Load nodes] *********************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"nds": [{"addr": "192.168.122.103", "hostname": "np0005626459.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.104", "hostname": "np0005626460.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.105", "hostname": "np0005626461.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.106", "hostname": "np0005626463.localdomain", "labels": ["osd"], "status": ""}, {"addr": "192.168.122.107", "hostname": "np0005626465.localdomain", "labels": ["osd"], "status": ""}, {"addr": "192.168.122.108", "hostname": "np0005626466.localdomain", "labels": ["osd"], "status": ""}]}, "changed": false} TASK [ceph_migrate : Load hostmap List] **************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"hostmap": {"np0005626459.localdomain": ["_admin", "mon", "mgr"], "np0005626460.localdomain": ["_admin", "mon", "mgr"], "np0005626461.localdomain": ["_admin", "mon", "mgr"], "np0005626463.localdomain": ["osd"], "np0005626465.localdomain": ["osd"], "np0005626466.localdomain": ["osd"]}}, "changed": false} TASK [ceph_migrate : Print Host Map] ******************************************* skipping: [np0005626459.localdomain] => (item=np0005626459.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626459.localdomain"} 
skipping: [np0005626459.localdomain] => (item=np0005626460.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626460.localdomain"} skipping: [np0005626459.localdomain] => (item=np0005626461.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626461.localdomain"} skipping: [np0005626459.localdomain] => (item=np0005626463.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626463.localdomain"} skipping: [np0005626459.localdomain] => (item=np0005626465.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626465.localdomain"} skipping: [np0005626459.localdomain] => (item=np0005626466.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626466.localdomain"} skipping: [np0005626459.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch host ls output to log file] **************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph monmap and load data] **************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "mon", "dump", "-f", "json"], "delta": "0:00:02.627247", "end": "2026-02-23 09:42:21.406793", "msg": "", "rc": 0, "start": "2026-02-23 09:42:18.779546", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\ndumped monmap epoch 3", "stderr_lines": ["Inferring fsid 
f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /var/lib/ceph/f1fea371-cb69-578d-a3d0-b5c472a84b46/mon.np0005626459/config", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "dumped monmap epoch 3"], "stdout": "\n{\"epoch\":3,\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"modified\":\"2026-02-23T07:38:26.404272Z\",\"created\":\"2026-02-23T07:36:01.997603Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: \":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005626459\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005626461\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005626460\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}", "stdout_lines": ["", 
"{\"epoch\":3,\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"modified\":\"2026-02-23T07:38:26.404272Z\",\"created\":\"2026-02-23T07:36:01.997603Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: \":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005626459\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005626461\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005626460\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}"]} TASK [ceph_migrate : Get Monmap] *********************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mon_dump": {"created": "2026-02-23T07:36:01.997603Z", "disallowed_leaders: ": "", "election_strategy": 1, "epoch": 3, "features": {"optional": [], "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging", "quincy", "reef"]}, "fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46", "min_mon_release": 18, 
"min_mon_release_name": "reef", "modified": "2026-02-23T07:38:26.404272Z", "mons": [{"addr": "172.18.0.103:6789/0", "crush_location": "{}", "name": "np0005626459", "priority": 0, "public_addr": "172.18.0.103:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.103:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.103:6789", "nonce": 0, "type": "v1"}]}, "rank": 0, "weight": 0}, {"addr": "172.18.0.105:6789/0", "crush_location": "{}", "name": "np0005626461", "priority": 0, "public_addr": "172.18.0.105:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.105:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.105:6789", "nonce": 0, "type": "v1"}]}, "rank": 1, "weight": 0}, {"addr": "172.18.0.104:6789/0", "crush_location": "{}", "name": "np0005626460", "priority": 0, "public_addr": "172.18.0.104:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.104:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.104:6789", "nonce": 0, "type": "v1"}]}, "rank": 2, "weight": 0}], "quorum": [0, 1, 2], "removed_ranks: ": "", "stretch_mode": false, "tiebreaker_mon": ""}}, "changed": false} TASK [ceph_migrate : Print monmap] ********************************************* skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph mon dump output to log file] ******************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Load nodes to decommission] ******************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"decomm_nodes": ["np0005626459.localdomain", "np0005626460.localdomain", "np0005626461.localdomain"]}, "changed": false} TASK [ceph_migrate : Load target nodes] **************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"target_nodes": ["np0005626463.localdomain", "np0005626465.localdomain", "np0005626466.localdomain"]}, 
"changed": false}

TASK [ceph_migrate : Print target nodes] ***************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug|default(false)"}

TASK [ceph_migrate : Print decomm_nodes] ***************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug|default(false)"}

TASK [ceph_migrate : ansible.builtin.fail if input is not provided] ************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph is undefined or ceph | length == 0", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Get cluster health] ***************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : ansible.builtin.fail if health is HEALTH_WARN || HEALTH_ERR] ***
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph.health.status == 'HEALTH_WARN' or ceph.health.status == 'HEALTH_ERR'", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : PgMap] ****************************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : ansible.builtin.fail if PGs are not in active+clean state] ***
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "pgstate != 'active+clean'", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : OSDMap] ***************************************************
ok: [np0005626459.localdomain] => {
    "msg": "100.0"
}

TASK [ceph_migrate : ansible.builtin.fail if there is an unacceptable OSDs number] ***
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "pct | float < 100", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MonMap] ***************************************************
skipping: [np0005626459.localdomain] => {"false_condition": "check_ceph_release | 
default(false) | bool"}

TASK [ceph_migrate : ansible.builtin.fail if Ceph <= Quincy] *******************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "check_ceph_release | default(false) | bool", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Mons in quorum] *******************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : ansible.builtin.fail if Mons are not in quorum] ***********
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph.monmap.num_mons < decomm_nodes | length", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : is Ceph Mgr available] ************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : ansible.builtin.fail if Mgr is not available] *************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not ceph.mgrmap.available | bool | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : in progress events] ***************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : ansible.builtin.fail if there are in progress events] *****
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph.progress_events | length > 0", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Dump Ceph Status] *****************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : Get ceph_cli] *********************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain

TASK [ceph_migrate : Set ceph CLI] 
********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : set container image base in ceph configuration] *********** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_base", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"], "delta": "0:00:00.704955", "end": "2026-02-23 09:42:23.548367", "msg": "", "rc": 0, "start": "2026-02-23 09:42:22.843412", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : set alertmanager container image in ceph configuration] *** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set grafana container image in ceph configuration] ******** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set node-exporter container image in ceph configuration] *** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set prometheus container image in ceph configuration] ***** skipping: 
[np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set haproxy container image in ceph configuration] ******** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_haproxy", "registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest"], "delta": "0:00:00.696280", "end": "2026-02-23 09:42:24.925111", "msg": "", "rc": 0, "start": "2026-02-23 09:42:24.228831", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Set keepalived container image in ceph configuration] ***** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_keepalived", "registry.redhat.io/rhceph/keepalived-rhel9:latest"], "delta": "0:00:00.820886", "end": "2026-02-23 09:42:26.279080", "msg": "", "rc": 0, "start": "2026-02-23 09:42:25.458194", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Update firewall rules on the target nodes] **************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005626459.localdomain => (item=np0005626463.localdomain) included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005626459.localdomain => (item=np0005626465.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005626459.localdomain => (item=np0005626466.localdomain) TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005626459.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005626459.localdomain -> np0005626463.localdomain(192.168.122.106)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005626459.localdomain -> np0005626463.localdomain(192.168.122.106)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Mon 2026-02-23 07:47:49 UTC", "ActiveEnterTimestampMonotonic": "4309443985", 
"ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target systemd-journald.socket system.slice basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Mon 2026-02-23 07:47:49 UTC", "AssertTimestampMonotonic": "4309357241", "Before": "multi-user.target network-pre.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "30051000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2026-02-23 07:47:49 UTC", "ConditionTimestampMonotonic": "4309357240", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", 
"ExecMainExitTimestamp": "Mon 2026-02-23 07:47:49 UTC", "ExecMainExitTimestampMonotonic": "4309443734", "ExecMainPID": "42311", "ExecMainStartTimestamp": "Mon 2026-02-23 07:47:49 UTC", "ExecMainStartTimestampMonotonic": "4309358798", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", 
"IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2026-02-23 07:47:49 UTC", "InactiveExitTimestampMonotonic": "4309359006", "InvocationID": "67eb9f03ab1844798d1dfa4b4b97b793", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", 
"NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Mon 2026-02-23 07:47:49 UTC", "StateChangeTimestampMonotonic": "4309443985", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", 
"TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005626459.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005626459.localdomain -> np0005626465.localdomain(192.168.122.107)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005626459.localdomain -> np0005626465.localdomain(192.168.122.107)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": 
"system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ActiveEnterTimestampMonotonic": "4300077878", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket basic.target system.slice sysinit.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Mon 2026-02-23 07:47:48 UTC", "AssertTimestampMonotonic": "4299980771", "Before": "shutdown.target multi-user.target network-pre.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "32776000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ConditionTimestampMonotonic": "4299980770", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", 
"Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ExecMainExitTimestampMonotonic": "4300077689", "ExecMainPID": "42793", "ExecMainStartTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ExecMainStartTimestampMonotonic": "4299982117", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", 
"IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2026-02-23 07:47:48 UTC", "InactiveExitTimestampMonotonic": "4299982342", "InvocationID": "7957de6a5af841e6b96e3af5872dcbe9", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", 
"NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Mon 2026-02-23 07:47:48 UTC", "StateChangeTimestampMonotonic": "4300077878", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": 
"no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005626459.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005626459.localdomain -> np0005626466.localdomain(192.168.122.108)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005626459.localdomain -> np0005626466.localdomain(192.168.122.108)] => 
{"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ActiveEnterTimestampMonotonic": "4309664903", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target systemd-journald.socket system.slice", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Mon 2026-02-23 07:47:48 UTC", "AssertTimestampMonotonic": "4309588111", "Before": "multi-user.target network-pre.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "15703000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ConditionTimestampMonotonic": "4309588110", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": 
"0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ExecMainExitTimestampMonotonic": "4309664692", "ExecMainPID": "42476", "ExecMainStartTimestamp": "Mon 2026-02-23 07:47:48 UTC", "ExecMainStartTimestampMonotonic": "4309597637", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": 
"18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2026-02-23 07:47:48 UTC", "InactiveExitTimestampMonotonic": "4309597880", "InvocationID": "5b6fe26825d840c3a11abb5bd3faf6e2", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", 
"MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target system.slice", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Mon 2026-02-23 07:47:48 UTC", "StateChangeTimestampMonotonic": "4309664903", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", 
"SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Get ceph_cli] ********************************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the dashboard port] *********************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the dashboard ssl port] ******************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Disable mgr dashboard module (restart)] ******************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Enable mgr dashboard module (restart)] ******************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": 
"ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the prometheus server port] *************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the prometheus server address] ************************ skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Enable prometheus module] ********************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => (item=['np0005626463.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005626463.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=['np0005626465.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005626465.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} 
skipping: [np0005626459.localdomain] => (item=['np0005626466.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005626466.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : MONITORING - Load Spec from the orchestrator] ************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the loaded data] ************************************ skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Update the Monitoring Stack spec definition] ************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : MONITORING - wait daemons] ******************************** skipping: [np0005626459.localdomain] => (item=grafana) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "grafana", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=prometheus) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "prometheus", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=alertmanager) => {"ansible_loop_var": "item", 
"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "alertmanager", "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Sleep before moving to the next daemon] ******************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MDS - Load Spec from the orchestrator] ******************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mds_spec": {"service_name": "mds.mds", "service_type": "mds", "spec": {}}}, "changed": false} TASK [ceph_migrate : Print the resulting MDS spec] ***************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: 
[np0005626459.localdomain] => (item=['np0005626459.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626459.localdomain", "mds"], "delta": "0:00:00.741748", "end": "2026-02-23 09:42:35.568859", "item": ["np0005626459.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:42:34.827111", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005626459.localdomain", "stdout_lines": ["Added label mds to host np0005626459.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626460.localdomain", "mds"], "delta": "0:00:00.669782", "end": "2026-02-23 09:42:36.751995", "item": ["np0005626460.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:42:36.082213", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005626460.localdomain", "stdout_lines": ["Added label mds to host np0005626460.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", 
"f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626461.localdomain", "mds"], "delta": "0:00:00.668202", "end": "2026-02-23 09:42:37.925444", "item": ["np0005626461.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:42:37.257242", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005626461.localdomain", "stdout_lines": ["Added label mds to host np0005626461.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626463.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626463.localdomain", "mds"], "delta": "0:00:00.630108", "end": "2026-02-23 09:42:39.038815", "item": ["np0005626463.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:42:38.408707", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005626463.localdomain", "stdout_lines": ["Added label mds to host np0005626463.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626465.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626465.localdomain", "mds"], "delta": "0:00:00.692260", "end": "2026-02-23 09:42:40.250340", "item": ["np0005626465.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 
09:42:39.558080", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005626465.localdomain", "stdout_lines": ["Added label mds to host np0005626465.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626466.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626466.localdomain", "mds"], "delta": "0:00:00.707825", "end": "2026-02-23 09:42:41.461984", "item": ["np0005626466.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:42:40.754159", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005626466.localdomain", "stdout_lines": ["Added label mds to host np0005626466.localdomain"]} TASK [ceph_migrate : Update the MDS Daemon spec definition] ******************** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mds:/home/tripleo-admin/mds:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mds"], "delta": "0:00:00.622850", "end": "2026-02-23 09:42:42.811042", "rc": 0, "start": "2026-02-23 09:42:42.188192", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mds.mds update...", "stdout_lines": ["Scheduled mds.mds update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "debug | 
default(false)"} TASK [ceph_migrate : Wait for the orchestrator to process the spec] ************ Pausing for 30 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2026-02-23 09:42:42.941832", "stderr": "", "stdout": "Paused for 30.0 seconds", "stop": "2026-02-23 09:43:12.946440", "user_input": ""} TASK [ceph_migrate : Reload the updated mdsmap] ******************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "fs", "status", "cephfs", "-f", "json"], "delta": "0:00:00.695747", "end": "2026-02-23 09:43:14.150676", "msg": "", "rc": 0, "start": "2026-02-23 09:43:13.454929", "stderr": "", "stderr_lines": [], "stdout": "\n{\"clients\": [{\"clients\": 0, \"fs\": \"cephfs\"}], \"mds_version\": [{\"daemon\": [\"mds.np0005626461.jyrtpf\", \"mds.np0005626459.bntrsx\", \"mds.np0005626460.jdqmoz\", \"mds.np0005626465.drvnoy\", \"mds.np0005626466.vaywlp\", \"mds.np0005626463.qcthuc\"], \"version\": \"ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable)\"}], \"mdsmap\": [{\"caps\": 0, \"dirs\": 12, \"dns\": 10, \"inos\": 13, \"name\": \"mds.np0005626461.jyrtpf\", \"rank\": 0, \"rate\": 0, \"state\": \"active\"}, {\"name\": \"mds.np0005626459.bntrsx\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626460.jdqmoz\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626465.drvnoy\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626466.vaywlp\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626463.qcthuc\", \"state\": \"standby\"}], \"pools\": [{\"avail\": 14020154368, \"id\": 7, \"name\": \"manila_metadata\", \"type\": \"metadata\", \"used\": 98304}, {\"avail\": 14020154368, 
\"id\": 6, \"name\": \"manila_data\", \"type\": \"data\", \"used\": 0}]}", "stdout_lines": ["", "{\"clients\": [{\"clients\": 0, \"fs\": \"cephfs\"}], \"mds_version\": [{\"daemon\": [\"mds.np0005626461.jyrtpf\", \"mds.np0005626459.bntrsx\", \"mds.np0005626460.jdqmoz\", \"mds.np0005626465.drvnoy\", \"mds.np0005626466.vaywlp\", \"mds.np0005626463.qcthuc\"], \"version\": \"ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable)\"}], \"mdsmap\": [{\"caps\": 0, \"dirs\": 12, \"dns\": 10, \"inos\": 13, \"name\": \"mds.np0005626461.jyrtpf\", \"rank\": 0, \"rate\": 0, \"state\": \"active\"}, {\"name\": \"mds.np0005626459.bntrsx\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626460.jdqmoz\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626465.drvnoy\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626466.vaywlp\", \"state\": \"standby\"}, {\"name\": \"mds.np0005626463.qcthuc\", \"state\": \"standby\"}], \"pools\": [{\"avail\": 14020154368, \"id\": 7, \"name\": \"manila_metadata\", \"type\": \"metadata\", \"used\": 98304}, {\"avail\": 14020154368, \"id\": 6, \"name\": \"manila_data\", \"type\": \"data\", \"used\": 0}]}"]} TASK [ceph_migrate : Get MDS Daemons] ****************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mds_daemons": {"clients": [{"clients": 0, "fs": "cephfs"}], "mds_version": [{"daemon": ["mds.np0005626461.jyrtpf", "mds.np0005626459.bntrsx", "mds.np0005626460.jdqmoz", "mds.np0005626465.drvnoy", "mds.np0005626466.vaywlp", "mds.np0005626463.qcthuc"], "version": "ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable)"}], "mdsmap": [{"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "mds.np0005626461.jyrtpf", "rank": 0, "rate": 0, "state": "active"}, {"name": "mds.np0005626459.bntrsx", "state": "standby"}, {"name": "mds.np0005626460.jdqmoz", "state": "standby"}, {"name": "mds.np0005626465.drvnoy", "state": "standby"}, {"name": 
"mds.np0005626466.vaywlp", "state": "standby"}, {"name": "mds.np0005626463.qcthuc", "state": "standby"}], "pools": [{"avail": 14020154368, "id": 7, "name": "manila_metadata", "type": "metadata", "used": 98304}, {"avail": 14020154368, "id": 6, "name": "manila_data", "type": "data", "used": 0}]}}, "changed": false} TASK [ceph_migrate : Print Daemons] ******************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Get MDS daemons that are not part of decomm nodes] ******** skipping: [np0005626459.localdomain] => (item={'caps': 0, 'dirs': 12, 'dns': 10, 'inos': 13, 'name': 'mds.np0005626461.jyrtpf', 'rank': 0, 'rate': 0, 'state': 'active'}) => {"ansible_loop_var": "item", "changed": false, "false_condition": "item.state == \"standby\"", "item": {"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "mds.np0005626461.jyrtpf", "rank": 0, "rate": 0, "state": "active"}, "skip_reason": "Conditional result was False"} ok: [np0005626459.localdomain] => (item={'name': 'mds.np0005626459.bntrsx', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005626459.bntrsx", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005626459.bntrsx", "state": "standby"}} ok: [np0005626459.localdomain] => (item={'name': 'mds.np0005626460.jdqmoz', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005626460.jdqmoz", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005626460.jdqmoz", "state": "standby"}} ok: [np0005626459.localdomain] => (item={'name': 'mds.np0005626465.drvnoy', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005626465.drvnoy", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005626465.drvnoy", "state": "standby"}} ok: [np0005626459.localdomain] => (item={'name': 
'mds.np0005626466.vaywlp', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005626466.vaywlp", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005626466.vaywlp", "state": "standby"}} ok: [np0005626459.localdomain] => (item={'name': 'mds.np0005626463.qcthuc', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005626463.qcthuc", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005626463.qcthuc", "state": "standby"}} TASK [ceph_migrate : Affinity daemon selected] ********************************* ok: [np0005626459.localdomain] => { "msg": { "name": "mds.np0005626463.qcthuc", "state": "standby" } } TASK [ceph_migrate : Set MDS affinity] ***************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring config set mds.np0005626463.qcthuc mds_join_fs cephfs", "delta": "0:00:00.649828", "end": "2026-02-23 09:43:15.609313", "msg": "", "rc": 0, "start": "2026-02-23 09:43:14.959485", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005626459.localdomain] => (item=['np0005626459.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", 
"/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626459.localdomain", "mds"], "delta": "0:00:00.724685", "end": "2026-02-23 09:43:16.956420", "item": ["np0005626459.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:43:16.231735", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005626459.localdomain", "stdout_lines": ["Removed label mds from host np0005626459.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626460.localdomain", "mds"], "delta": "0:00:00.760598", "end": "2026-02-23 09:43:18.220594", "item": ["np0005626460.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:43:17.459996", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005626460.localdomain", "stdout_lines": ["Removed label mds from host np0005626460.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626461.localdomain", "mds"], "delta": "0:00:00.691264", "end": 
"2026-02-23 09:43:19.500327", "item": ["np0005626461.localdomain", "mds"], "msg": "", "rc": 0, "start": "2026-02-23 09:43:18.809063", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005626461.localdomain", "stdout_lines": ["Removed label mds from host np0005626461.localdomain"]} TASK [ceph_migrate : Wait daemons] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mds] ********************************************* changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mds", "-f", "json"], "delta": "0:00:00.648851", "end": "2026-02-23 09:43:20.772136", "msg": "", "rc": 0, "start": "2026-02-23 09:43:20.123285", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"e0e241661659\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.05%\", \"created\": \"2026-02-23T07:57:47.437213Z\", \"daemon_id\": \"mds.np0005626459.bntrsx\", 
\"daemon_name\": \"mds.mds.np0005626459.bntrsx\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T07:57:47.524764Z daemon:mds.mds.np0005626459.bntrsx [INFO] \\\"Deployed mds.mds.np0005626459.bntrsx on host 'np0005626459.localdomain'\\\"\"], \"hostname\": \"np0005626459.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:16.319992Z\", \"memory_usage\": 25522339, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T07:57:47.339807Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"7019b3131bbb\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2026-02-23T07:57:50.171856Z\", \"daemon_id\": \"mds.np0005626460.jdqmoz\", \"daemon_name\": \"mds.mds.np0005626460.jdqmoz\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T07:57:50.380341Z daemon:mds.mds.np0005626460.jdqmoz [INFO] \\\"Deployed mds.mds.np0005626460.jdqmoz on host 'np0005626460.localdomain'\\\"\"], \"hostname\": \"np0005626460.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:16.199784Z\", \"memory_usage\": 25165824, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T07:57:50.063669Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"874714f4a438\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.17%\", \"created\": \"2026-02-23T07:57:45.114925Z\", \"daemon_id\": \"mds.np0005626461.jyrtpf\", \"daemon_name\": \"mds.mds.np0005626461.jyrtpf\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T07:57:45.209741Z daemon:mds.mds.np0005626461.jyrtpf [INFO] \\\"Deployed mds.mds.np0005626461.jyrtpf on host 'np0005626461.localdomain'\\\"\"], \"hostname\": \"np0005626461.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-23T09:42:15.719779Z\", \"memory_usage\": 26895974, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T07:57:45.005141Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"35c397f376b9\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"4.26%\", \"created\": \"2026-02-23T09:42:49.899462Z\", \"daemon_id\": \"mds.np0005626463.qcthuc\", \"daemon_name\": \"mds.mds.np0005626463.qcthuc\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T09:42:49.973239Z daemon:mds.mds.np0005626463.qcthuc [INFO] \\\"Deployed mds.mds.np0005626463.qcthuc on host 'np0005626463.localdomain'\\\"\"], \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:52.543031Z\", \"memory_usage\": 14155776, \"ports\": [], \"service_name\": 
\"mds.mds\", \"started\": \"2026-02-23T09:42:49.796808Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"4d0f1a071d29\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.35%\", \"created\": \"2026-02-23T09:42:47.681983Z\", \"daemon_id\": \"mds.np0005626465.drvnoy\", \"daemon_name\": \"mds.mds.np0005626465.drvnoy\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T09:42:47.775411Z daemon:mds.mds.np0005626465.drvnoy [INFO] \\\"Deployed mds.mds.np0005626465.drvnoy on host 'np0005626465.localdomain'\\\"\"], \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:52.200861Z\", \"memory_usage\": 14302576, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T09:42:47.562708Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"13d86d0fc7f1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.20%\", \"created\": \"2026-02-23T09:42:45.420811Z\", \"daemon_id\": \"mds.np0005626466.vaywlp\", \"daemon_name\": \"mds.mds.np0005626466.vaywlp\", \"daemon_type\": \"mds\", \"events\": 
[\"2026-02-23T09:42:45.499374Z daemon:mds.mds.np0005626466.vaywlp [INFO] \\\"Deployed mds.mds.np0005626466.vaywlp on host 'np0005626466.localdomain'\\\"\"], \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:52.590302Z\", \"memory_usage\": 14355005, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T09:42:45.318918Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"e0e241661659\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.05%\", \"created\": \"2026-02-23T07:57:47.437213Z\", \"daemon_id\": \"mds.np0005626459.bntrsx\", \"daemon_name\": \"mds.mds.np0005626459.bntrsx\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T07:57:47.524764Z daemon:mds.mds.np0005626459.bntrsx [INFO] \\\"Deployed mds.mds.np0005626459.bntrsx on host 'np0005626459.localdomain'\\\"\"], \"hostname\": \"np0005626459.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:16.319992Z\", \"memory_usage\": 25522339, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T07:57:47.339807Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"7019b3131bbb\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": 
\"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2026-02-23T07:57:50.171856Z\", \"daemon_id\": \"mds.np0005626460.jdqmoz\", \"daemon_name\": \"mds.mds.np0005626460.jdqmoz\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T07:57:50.380341Z daemon:mds.mds.np0005626460.jdqmoz [INFO] \\\"Deployed mds.mds.np0005626460.jdqmoz on host 'np0005626460.localdomain'\\\"\"], \"hostname\": \"np0005626460.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:16.199784Z\", \"memory_usage\": 25165824, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T07:57:50.063669Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"874714f4a438\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.17%\", \"created\": \"2026-02-23T07:57:45.114925Z\", \"daemon_id\": \"mds.np0005626461.jyrtpf\", \"daemon_name\": \"mds.mds.np0005626461.jyrtpf\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T07:57:45.209741Z daemon:mds.mds.np0005626461.jyrtpf [INFO] \\\"Deployed mds.mds.np0005626461.jyrtpf on host 'np0005626461.localdomain'\\\"\"], \"hostname\": \"np0005626461.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-23T09:42:15.719779Z\", \"memory_usage\": 26895974, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T07:57:45.005141Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, 
{\"container_id\": \"35c397f376b9\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"4.26%\", \"created\": \"2026-02-23T09:42:49.899462Z\", \"daemon_id\": \"mds.np0005626463.qcthuc\", \"daemon_name\": \"mds.mds.np0005626463.qcthuc\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T09:42:49.973239Z daemon:mds.mds.np0005626463.qcthuc [INFO] \\\"Deployed mds.mds.np0005626463.qcthuc on host 'np0005626463.localdomain'\\\"\"], \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:52.543031Z\", \"memory_usage\": 14155776, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T09:42:49.796808Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"4d0f1a071d29\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.35%\", \"created\": \"2026-02-23T09:42:47.681983Z\", \"daemon_id\": \"mds.np0005626465.drvnoy\", \"daemon_name\": \"mds.mds.np0005626465.drvnoy\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T09:42:47.775411Z daemon:mds.mds.np0005626465.drvnoy [INFO] \\\"Deployed mds.mds.np0005626465.drvnoy on host 
'np0005626465.localdomain'\\\"\"], \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:52.200861Z\", \"memory_usage\": 14302576, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T09:42:47.562708Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"13d86d0fc7f1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.20%\", \"created\": \"2026-02-23T09:42:45.420811Z\", \"daemon_id\": \"mds.np0005626466.vaywlp\", \"daemon_name\": \"mds.mds.np0005626466.vaywlp\", \"daemon_type\": \"mds\", \"events\": [\"2026-02-23T09:42:45.499374Z daemon:mds.mds.np0005626466.vaywlp [INFO] \\\"Deployed mds.mds.np0005626466.vaywlp on host 'np0005626466.localdomain'\\\"\"], \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:42:52.590302Z\", \"memory_usage\": 14355005, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2026-02-23T09:42:45.318918Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Sleep before moving to the next phase] ******************** Pausing for 30 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2026-02-23 09:43:20.935756", "stderr": "", "stdout": "Paused for 30.01 seconds", "stop": "2026-02-23 09:43:50.942702", "user_input": ""} TASK [ceph_migrate : Get ceph_cli] ********************************************* skipping: [np0005626459.localdomain] => {"changed": false, 
"false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if RGW VIPs are not defined] ************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => (item=['np0005626463.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005626463.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=['np0005626465.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005626465.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => (item=['np0005626466.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005626466.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005626459.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : RGW - Load Spec from the orchestrator] ******************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | 
default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the loaded data] ************************************ skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Apply ceph rgw keystone config] *************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Update the RGW spec definition] *************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Create the Ingress Daemon spec definition for RGW] ******** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Wait for cephadm to redeploy] ***************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : RGW - wait daemons] *************************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Setup a Ceph client to the first node] 
******************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_client.yaml for np0005626459.localdomain TASK [ceph_migrate : TMP_CLIENT - Patch os-net-config config and setup a tmp client IP] *** changed: [np0005626459.localdomain] => {"backup": "/etc/os-net-config/tripleo_config.yaml.490224.2026-02-23@09:43:52~", "changed": true, "msg": "line added and ownership, perms or SE linux context changed"} TASK [ceph_migrate : TMP_CLIENT - Refresh os-net-config] *********************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["os-net-config", "-c", "/etc/os-net-config/tripleo_config.yaml"], "delta": "0:00:07.417586", "end": "2026-02-23 09:44:00.054208", "msg": "", "rc": 0, "start": "2026-02-23 09:43:52.636622", "stderr": "", "stderr_lines": [], "stdout": "2026-02-23 09:43:53.630 ERROR os_net_config.execute stderr : WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.\nWARN : [ifdown] 'network-scripts' will be removed from distribution in near future.\nWARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.\n\n2026-02-23 09:43:59.992 ERROR os_net_config.execute stderr : WARN : [ifup] You are using 'ifup' script provided by 'network-scripts', which are now deprecated.\nWARN : [ifup] 'network-scripts' will be removed from distribution in near future.\nWARN : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.", "stdout_lines": ["2026-02-23 09:43:53.630 ERROR os_net_config.execute stderr : WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.", "WARN : [ifdown] 'network-scripts' will be removed from distribution in near future.", "WARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.", "", "2026-02-23 09:43:59.992 
ERROR os_net_config.execute stderr : WARN : [ifup] You are using 'ifup' script provided by 'network-scripts', which are now deprecated.", "WARN : [ifup] 'network-scripts' will be removed from distribution in near future.", "WARN : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well."]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005626459.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005626459.localdomain] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005626459.localdomain] => {"changed": false, "examined": 4, "files": [{"atime": 1771833430.228227, "ctime": 1771833429.2382007, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384497, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771832352.7301307, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771833430.2392273, "ctime": 1771833429.2382007, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384496, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771832205.4769254, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", 
"pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771833472.164338, "ctime": 1771833470.0762827, "dev": 64516, "gid": 167, "gr_name": "", "inode": 142738049, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771833469.713273, "nlink": 1, "path": "/etc/ceph/ceph.client.openstack.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771833473.2673671, "ctime": 1771833470.994307, "dev": 64516, "gid": 167, "gr_name": "", "inode": 167852769, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771833470.6812987, "nlink": 1, "path": "/etc/ceph/ceph.client.manila.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 4, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005626459.localdomain] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 271, 'inode': 159384497, 'dev': 64516, 'nlink': 1, 'atime': 1771833430.228227, 'mtime': 1771832352.7301307, 'ctime': 1771833429.2382007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => 
{"ansible_loop_var": "item", "changed": true, "checksum": "00be6682e39722cc7ebf9f74611435726ea0928d", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1771833430.228227, "ctime": 1771833429.2382007, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384497, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771832352.7301307, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "f655f3da832a63be9061f4d35b3955af", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 271, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005626459.localdomain] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 159384496, 'dev': 64516, 'nlink': 1, 'atime': 1771833430.2392273, 'mtime': 1771832205.4769254, 'ctime': 1771833429.2382007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "fd04ead011675324e6ecfc5b53f1d9fb294b7881", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1771833430.2392273, "ctime": 1771833429.2382007, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 159384496, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": 
"0600", "mtime": 1771832205.4769254, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "31f18ddffd881a7d7131636a0e4ce49c", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} changed: [np0005626459.localdomain] => (item={'path': '/etc/ceph/ceph.client.openstack.keyring', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 167, 'gid': 167, 'size': 231, 'inode': 142738049, 'dev': 64516, 'nlink': 1, 'atime': 1771833472.164338, 'mtime': 1771833469.713273, 'ctime': 1771833470.0762827, 'gr_name': '', 'pw_name': '', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "bb97f2335ebfccbfb2bd8d50bbb589ce7e034c5d", "dest": "/home/tripleo-admin/ceph_client/ceph.client.openstack.keyring", "gid": 0, "group": "root", "item": {"atime": 1771833472.164338, "ctime": 1771833470.0762827, "dev": 64516, "gid": 167, "gr_name": "", "inode": 142738049, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771833469.713273, "nlink": 1, "path": "/etc/ceph/ceph.client.openstack.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "6eff6e7633a83f445d9ead946e7ea469", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 231, "src": 
"/etc/ceph/ceph.client.openstack.keyring", "state": "file", "uid": 0} changed: [np0005626459.localdomain] => (item={'path': '/etc/ceph/ceph.client.manila.keyring', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 167, 'gid': 167, 'size': 153, 'inode': 167852769, 'dev': 64516, 'nlink': 1, 'atime': 1771833473.2673671, 'mtime': 1771833470.6812987, 'ctime': 1771833470.994307, 'gr_name': '', 'pw_name': '', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "04bfb06bbb9d2445e353d8ca8467b47fb8316e81", "dest": "/home/tripleo-admin/ceph_client/ceph.client.manila.keyring", "gid": 0, "group": "root", "item": {"atime": 1771833473.2673671, "ctime": 1771833470.994307, "dev": 64516, "gid": 167, "gr_name": "", "inode": 167852769, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771833470.6812987, "nlink": 1, "path": "/etc/ceph/ceph.client.manila.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "6fea08873d96fb138c2ad3ac95cfaa87", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 153, "src": "/etc/ceph/ceph.client.manila.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : Render global ceph.conf] ********************************** changed: [np0005626459.localdomain] => {"changed": true, "checksum": "35652e81851ba988308532419ad65d6d9c0476ad", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "md5sum": "46449f719b281efc7c936ea24835c325", "mode": "0644", "owner": "root", "secontext": 
"unconfined_u:object_r:user_home_t:s0", "size": 142, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771839843.9534335-62676-275366235222637/source", "state": "file", "uid": 0} TASK [ceph_migrate : MGR - Migrate RBD node] *********************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mgr.yaml for np0005626459.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MGR - Setup Mon/Mgr label to the target node] ************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005626459.localdomain TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005626459.localdomain] => (item=['np0005626463.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626463.localdomain", "mgr"], "delta": "0:00:00.773950", "end": "2026-02-23 09:44:06.512593", "item": ["np0005626463.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:05.738643", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005626463.localdomain", "stdout_lines": ["Added label mgr to host np0005626463.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626465.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626465.localdomain", "mgr"], "delta": "0:00:01.450701", "end": "2026-02-23 09:44:08.487466", "item": ["np0005626465.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:07.036765", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005626465.localdomain", "stdout_lines": ["Added label mgr to host np0005626465.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626466.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626466.localdomain", "mgr"], "delta": "0:00:00.651557", "end": "2026-02-23 09:44:09.658286", "item": 
["np0005626466.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:09.006729", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005626466.localdomain", "stdout_lines": ["Added label mgr to host np0005626466.localdomain"]} TASK [ceph_migrate : MGR - Load Spec from the orchestrator] ******************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mgr_spec": {"service_name": "mgr", "service_type": "mgr", "spec": {}}}, "changed": false} TASK [ceph_migrate : Update the MGR Daemon spec definition] ******************** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mgr:/home/tripleo-admin/mgr:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mgr"], "delta": "0:00:00.700819", "end": "2026-02-23 09:44:10.974854", "rc": 0, "start": "2026-02-23 09:44:10.274035", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mgr update...", "stdout_lines": ["Scheduled mgr update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MGR - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mgr] ********************************************* changed: [np0005626459.localdomain] => 
{"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mgr", "-f", "json"], "delta": "0:00:00.658355", "end": "2026-02-23 09:44:12.362429", "msg": "", "rc": 0, "start": "2026-02-23 09:44:11.704074", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"928db2dcb033\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.50%\", \"created\": \"2026-02-23T07:36:09.006122Z\", \"daemon_id\": \"np0005626459.pmtxxl\", \"daemon_name\": \"mgr.np0005626459.pmtxxl\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-23T07:39:15.850026Z daemon:mgr.np0005626459.pmtxxl [INFO] \\\"Reconfigured mgr.np0005626459.pmtxxl on host 'np0005626459.localdomain'\\\"\"], \"hostname\": \"np0005626459.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-23T09:43:24.279005Z\", \"memory_usage\": 540226355, \"ports\": [9283, 8765], \"service_name\": \"mgr\", \"started\": \"2026-02-23T07:36:08.860717Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"bd518ce09e06\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-23T07:38:33.665542Z\", \"daemon_id\": \"np0005626460.fyrady\", \"daemon_name\": \"mgr.np0005626460.fyrady\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-23T07:38:33.753093Z daemon:mgr.np0005626460.fyrady [INFO] \\\"Deployed mgr.np0005626460.fyrady on host 'np0005626460.localdomain'\\\"\"], \"hostname\": \"np0005626460.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:43:44.156628Z\", \"memory_usage\": 473012633, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-23T07:38:33.523621Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"ada62cf9a11c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-23T07:38:30.202893Z\", \"daemon_id\": \"np0005626461.lrfquh\", \"daemon_name\": \"mgr.np0005626461.lrfquh\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-23T07:38:31.542378Z daemon:mgr.np0005626461.lrfquh [INFO] \\\"Deployed mgr.np0005626461.lrfquh on host 'np0005626461.localdomain'\\\"\", \"2026-02-23T07:39:20.085529Z daemon:mgr.np0005626461.lrfquh [INFO] \\\"Reconfigured mgr.np0005626461.lrfquh on host 'np0005626461.localdomain'\\\"\"], \"hostname\": \"np0005626461.localdomain\", \"is_active\": 
false, \"last_refresh\": \"2026-02-23T09:43:44.177834Z\", \"memory_usage\": 473536921, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-23T07:38:26.808826Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"928db2dcb033\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.50%\", \"created\": \"2026-02-23T07:36:09.006122Z\", \"daemon_id\": \"np0005626459.pmtxxl\", \"daemon_name\": \"mgr.np0005626459.pmtxxl\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-23T07:39:15.850026Z daemon:mgr.np0005626459.pmtxxl [INFO] \\\"Reconfigured mgr.np0005626459.pmtxxl on host 'np0005626459.localdomain'\\\"\"], \"hostname\": \"np0005626459.localdomain\", \"is_active\": true, \"last_refresh\": \"2026-02-23T09:43:24.279005Z\", \"memory_usage\": 540226355, \"ports\": [9283, 8765], \"service_name\": \"mgr\", \"started\": \"2026-02-23T07:36:08.860717Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"bd518ce09e06\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-23T07:38:33.665542Z\", 
\"daemon_id\": \"np0005626460.fyrady\", \"daemon_name\": \"mgr.np0005626460.fyrady\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-23T07:38:33.753093Z daemon:mgr.np0005626460.fyrady [INFO] \\\"Deployed mgr.np0005626460.fyrady on host 'np0005626460.localdomain'\\\"\"], \"hostname\": \"np0005626460.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:43:44.156628Z\", \"memory_usage\": 473012633, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-23T07:38:33.523621Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}, {\"container_id\": \"ada62cf9a11c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2026-02-23T07:38:30.202893Z\", \"daemon_id\": \"np0005626461.lrfquh\", \"daemon_name\": \"mgr.np0005626461.lrfquh\", \"daemon_type\": \"mgr\", \"events\": [\"2026-02-23T07:38:31.542378Z daemon:mgr.np0005626461.lrfquh [INFO] \\\"Deployed mgr.np0005626461.lrfquh on host 'np0005626461.localdomain'\\\"\", \"2026-02-23T07:39:20.085529Z daemon:mgr.np0005626461.lrfquh [INFO] \\\"Reconfigured mgr.np0005626461.lrfquh on host 'np0005626461.localdomain'\\\"\"], \"hostname\": \"np0005626461.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:43:44.177834Z\", \"memory_usage\": 473536921, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2026-02-23T07:38:26.808826Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Load Spec from the orchestrator] ******************** ok: 
[np0005626459.localdomain] => {"ansible_facts": {"mon_spec": {"service_name": "mon", "service_type": "mon", "spec": {}}}, "changed": false} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005626459.localdomain] => (item=['np0005626459.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626459.localdomain", "mon"], "delta": "0:00:00.686021", "end": "2026-02-23 09:44:13.768332", "item": ["np0005626459.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:13.082311", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005626459.localdomain", "stdout_lines": ["Added label mon to host np0005626459.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626459.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626459.localdomain", "_admin"], "delta": "0:00:00.678302", "end": "2026-02-23 09:44:14.936967", "item": ["np0005626459.localdomain", "_admin"], 
"msg": "", "rc": 0, "start": "2026-02-23 09:44:14.258665", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005626459.localdomain", "stdout_lines": ["Added label _admin to host np0005626459.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626460.localdomain", "mon"], "delta": "0:00:00.647440", "end": "2026-02-23 09:44:16.137480", "item": ["np0005626460.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:15.490040", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005626460.localdomain", "stdout_lines": ["Added label mon to host np0005626460.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626460.localdomain", "_admin"], "delta": "0:00:00.652969", "end": "2026-02-23 09:44:17.342848", "item": ["np0005626460.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:16.689879", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005626460.localdomain", "stdout_lines": ["Added label _admin to host np0005626460.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', 'mon']) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626461.localdomain", "mon"], "delta": "0:00:00.772012", "end": "2026-02-23 09:44:18.643699", "item": ["np0005626461.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:17.871687", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005626461.localdomain", "stdout_lines": ["Added label mon to host np0005626461.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626461.localdomain", "_admin"], "delta": "0:00:00.690041", "end": "2026-02-23 09:44:19.878133", "item": ["np0005626461.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:19.188092", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005626461.localdomain", "stdout_lines": ["Added label _admin to host np0005626461.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626463.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", 
"/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626463.localdomain", "mon"], "delta": "0:00:00.701761", "end": "2026-02-23 09:44:21.120635", "item": ["np0005626463.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:20.418874", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005626463.localdomain", "stdout_lines": ["Added label mon to host np0005626463.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626463.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626463.localdomain", "_admin"], "delta": "0:00:00.837033", "end": "2026-02-23 09:44:22.535199", "item": ["np0005626463.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:21.698166", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005626463.localdomain", "stdout_lines": ["Added label _admin to host np0005626463.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626465.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626465.localdomain", "mon"], "delta": "0:00:00.696209", "end": "2026-02-23 09:44:23.689008", "item": ["np0005626465.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:22.992799", "stderr": "", "stderr_lines": [], "stdout": 
"Added label mon to host np0005626465.localdomain", "stdout_lines": ["Added label mon to host np0005626465.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626465.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626465.localdomain", "_admin"], "delta": "0:00:00.710399", "end": "2026-02-23 09:44:24.904785", "item": ["np0005626465.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:24.194386", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005626465.localdomain", "stdout_lines": ["Added label _admin to host np0005626465.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626466.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626466.localdomain", "mon"], "delta": "0:00:00.680622", "end": "2026-02-23 09:44:26.103388", "item": ["np0005626466.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:25.422766", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005626466.localdomain", "stdout_lines": ["Added label mon to host np0005626466.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626466.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", 
"--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005626466.localdomain", "_admin"], "delta": "0:00:00.670313", "end": "2026-02-23 09:44:27.239117", "item": ["np0005626466.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:44:26.568804", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005626466.localdomain", "stdout_lines": ["Added label _admin to host np0005626466.localdomain"]} TASK [ceph_migrate : Normalize the mon spec to use labels] ********************* ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.703323", "end": "2026-02-23 09:44:28.518624", "rc": 0, "start": "2026-02-23 09:44:27.815301", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : RBD - wait new daemons to be available] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain => (item=np0005626463.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain => (item=np0005626465.localdomain) included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain => (item=np0005626466.localdomain) TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* FAILED - RETRYING: [np0005626459.localdomain]: wait for mon (200 retries left). FAILED - RETRYING: [np0005626459.localdomain]: wait for mon (199 retries left). FAILED - RETRYING: [np0005626459.localdomain]: wait for mon (198 retries left). changed: [np0005626459.localdomain] => {"attempts": 4, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626463", "-f", "json"], "delta": "0:00:00.647550", "end": "2026-02-23 09:44:50.055578", "msg": "", "rc": 0, "start": "2026-02-23 09:44:49.408028", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"081a8332e685\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.23%\", \"created\": \"2026-02-23T09:44:39.327826Z\", \"daemon_id\": \"np0005626463\", \"daemon_name\": \"mon.np0005626463\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:44:43.480973Z 
daemon:mon.np0005626463 [INFO] \\\"Deployed mon.np0005626463 on host 'np0005626463.localdomain'\\\"\"], \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:45.696762Z\", \"memory_request\": 2147483648, \"memory_usage\": 41555066, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:39.216952Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"081a8332e685\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.23%\", \"created\": \"2026-02-23T09:44:39.327826Z\", \"daemon_id\": \"np0005626463\", \"daemon_name\": \"mon.np0005626463\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:44:43.480973Z daemon:mon.np0005626463 [INFO] \\\"Deployed mon.np0005626463 on host 'np0005626463.localdomain'\\\"\"], \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:45.696762Z\", \"memory_request\": 2147483648, \"memory_usage\": 41555066, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:39.216952Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", 
"--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626465", "-f", "json"], "delta": "0:00:00.648857", "end": "2026-02-23 09:44:51.473078", "msg": "", "rc": 0, "start": "2026-02-23 09:44:50.824221", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"287dcf2f52ac\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.42%\", \"created\": \"2026-02-23T09:44:33.999188Z\", \"daemon_id\": \"np0005626465\", \"daemon_name\": \"mon.np0005626465\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:44:36.877603Z daemon:mon.np0005626465 [INFO] \\\"Deployed mon.np0005626465 on host 'np0005626465.localdomain'\\\"\"], \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:45.677848Z\", \"memory_request\": 2147483648, \"memory_usage\": 42152755, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:33.892574Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"287dcf2f52ac\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], 
\"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.42%\", \"created\": \"2026-02-23T09:44:33.999188Z\", \"daemon_id\": \"np0005626465\", \"daemon_name\": \"mon.np0005626465\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:44:36.877603Z daemon:mon.np0005626465 [INFO] \\\"Deployed mon.np0005626465 on host 'np0005626465.localdomain'\\\"\"], \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:45.677848Z\", \"memory_request\": 2147483648, \"memory_usage\": 42152755, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:33.892574Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626466", "-f", "json"], "delta": "0:00:00.605552", "end": "2026-02-23 09:44:52.661404", "msg": "", "rc": 0, "start": "2026-02-23 09:44:52.055852", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"c4c7b411729c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.57%\", \"created\": \"2026-02-23T09:44:31.421078Z\", \"daemon_id\": \"np0005626466\", \"daemon_name\": \"mon.np0005626466\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:44:31.499895Z daemon:mon.np0005626466 [INFO] \\\"Deployed mon.np0005626466 on host 'np0005626466.localdomain'\\\"\"], \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:45.958053Z\", \"memory_request\": 2147483648, \"memory_usage\": 42844815, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:31.327586Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"c4c7b411729c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.57%\", \"created\": \"2026-02-23T09:44:31.421078Z\", \"daemon_id\": \"np0005626466\", \"daemon_name\": \"mon.np0005626466\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:44:31.499895Z daemon:mon.np0005626466 [INFO] \\\"Deployed mon.np0005626466 on host 'np0005626466.localdomain'\\\"\"], \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:45.958053Z\", \"memory_request\": 2147483648, \"memory_usage\": 42844815, \"ports\": [], \"service_name\": 
\"mon\", \"started\": \"2026-02-23T09:44:31.327586Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Migrate RBD node] *********************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005626459.localdomain => (item=['np0005626459.localdomain', 'np0005626463.localdomain']) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005626459.localdomain => (item=['np0005626460.localdomain', 'np0005626465.localdomain']) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005626459.localdomain => (item=['np0005626461.localdomain', 'np0005626466.localdomain']) TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005626459.localdomain] => { "msg": "Migrate mon: np0005626459.localdomain to node: np0005626463.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005626459.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.103"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** 
changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.648900", "end": "2026-02-23 09:44:54.243435", "msg": "", "rc": 0, "start": "2026-02-23 09:44:53.594535", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":26,\"quorum\":[0,1,2,3,4,5],\"quorum_names\":[\"np0005626459\",\"np0005626461\",\"np0005626460\",\"np0005626466\",\"np0005626465\",\"np0005626463\"],\"quorum_age\":5,\"monmap\":{\"epoch\":6,\"min_mon_release_name\":\"reef\",\"num_mons\":6},\"osdmap\":{\"epoch\":80,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":593014784,\"bytes_avail\":44478976000,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":71,\"modified\":\"2026-02-23T09:44:31.153218+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005626463.wtksup\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 
0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005626465.hlpkwo\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005626466.nisqfq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":26,\"quorum\":[0,1,2,3,4,5],\"quorum_names\":[\"np0005626459\",\"np0005626461\",\"np0005626460\",\"np0005626466\",\"np0005626465\",\"np0005626463\"],\"quorum_age\":5,\"monmap\":{\"epoch\":6,\"min_mon_release_name\":\"reef\",\"num_mons\":6},\"osdmap\":{\"epoch\":80,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":593014784,\"bytes_avail\":44478976000,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":71,\"modified\":\"2026-02-23T09:44:31.153218+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005626463.wtksup\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005626465.hlpkwo\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 
0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005626466.nisqfq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "cur_mon != client_node", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.736737", "end": "2026-02-23 09:44:55.589477", "msg": "", "rc": 0, "start": "2026-02-23 09:44:54.852740", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":14,\"available\":true,\"active_name\":\"np0005626459.pmtxxl\",\"num_standby\":5}", "stdout_lines": ["", "{\"epoch\":14,\"available\":true,\"active_name\":\"np0005626459.pmtxxl\",\"num_standby\":5}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005626459.pmtxxl", "available": true, "epoch": 14, "num_standby": 5}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005626459.localdomain TASK 
[ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.826451", "end": "2026-02-23 09:44:57.247763", "msg": "", "rc": 0, "start": "2026-02-23 09:44:56.421312", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:44:57.365590", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:45:07.377280", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005626459.localdomain: jid=j147926459081.497448 changed: [np0005626459.localdomain] => {"ansible_job_id": "j147926459081.497448", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", 
"/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.692352", "end": "2026-02-23 09:45:09.020129", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j147926459081.497448", "start": "2026-02-23 09:45:08.327777", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005626459.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626459", "-f", "json"], "delta": "0:00:00.652926", "end": "2026-02-23 09:45:11.694063", "msg": "", "rc": 0, "start": "2026-02-23 09:45:11.041137", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"25f9be9fcf82\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.70%\", \"created\": \"2026-02-23T07:36:04.379001Z\", \"daemon_id\": \"np0005626459\", \"daemon_name\": \"mon.np0005626459\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626459.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:59.882154Z\", \"memory_request\": 2147483648, \"memory_usage\": 144074342, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T07:36:07.316021Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"25f9be9fcf82\", \"container_image_digests\": 
[\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.70%\", \"created\": \"2026-02-23T07:36:04.379001Z\", \"daemon_id\": \"np0005626459\", \"daemon_name\": \"mon.np0005626459\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626459.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:59.882154Z\", \"memory_request\": 2147483648, \"memory_usage\": 144074342, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T07:36:07.316021Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005626459", "--force"], "delta": "0:00:06.829957", "end": "2026-02-23 09:45:19.107801", "msg": "", "rc": 0, "start": "2026-02-23 09:45:12.277844", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005626459 from host 'np0005626459.localdomain'", "stdout_lines": ["Removed mon.np0005626459 from host 'np0005626459.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for 
np0005626459.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005626459.localdomain] => (item=['np0005626459.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626459.localdomain", "mon"], "delta": "0:00:00.726550", "end": "2026-02-23 09:45:20.553820", "item": ["np0005626459.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:45:19.827270", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005626459.localdomain", "stdout_lines": ["Removed label mon from host np0005626459.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626459.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626459.localdomain", "mgr"], "delta": "0:00:00.695911", "end": "2026-02-23 09:45:21.804583", "item": ["np0005626459.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-23 09:45:21.108672", "stderr": "", 
"stderr_lines": [], "stdout": "Removed label mgr from host np0005626459.localdomain", "stdout_lines": ["Removed label mgr from host np0005626459.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626459.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626459.localdomain", "_admin"], "delta": "0:00:00.848633", "end": "2026-02-23 09:45:23.202369", "item": ["np0005626459.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:45:22.353736", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005626459.localdomain", "stdout_lines": ["Removed label _admin from host np0005626459.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:45:23.319647", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:45:33.331326", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005626459.localdomain"], "delta": "0:00:00.746513", "end": "2026-02-23 09:45:34.583961", "msg": "", "rc": 0, "start": "2026-02-23 
09:45:33.837448", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005626459.localdomain'\ntype id \n-------------------- ---------------\nmgr np0005626459.pmtxxl\ncrash np0005626459 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005626459.localdomain'", "type id ", "-------------------- ---------------", "mgr np0005626459.pmtxxl", "crash np0005626459 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005626459.localdomain", "-f", "json"], "delta": "0:00:00.707403", "end": "2026-02-23 09:45:35.946479", "msg": "", "rc": 0, "start": "2026-02-23 09:45:35.239076", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005626459.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005626459.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005626459.localdomain", "--force"], "delta": "0:00:00.833483", "end": "2026-02-23 09:45:37.618785", "msg": "", "rc": 0, "start": "2026-02-23 09:45:36.785302", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005626459.localdomain'", "stdout_lines": ["Removed host 'np0005626459.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005626459.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005626459.localdomain] => {"backup": "/etc/os-net-config/tripleo_config.yaml.499841.2026-02-23@09:45:38~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.103/24", "dev", "vlan21"], "delta": "0:00:00.005452", "end": "2026-02-23 09:45:39.182237", "msg": "", "rc": 0, "start": "2026-02-23 09:45:39.176785", "stderr": 
"", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005626459.localdomain -> np0005626463.localdomain(192.168.122.106)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.292527.2026-02-23@09:45:40~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005626459.localdomain -> np0005626463.localdomain(192.168.122.106)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.103/24", "dev", "vlan21"], "delta": "0:00:00.005216", "end": "2026-02-23 09:45:41.217795", "msg": "", "rc": 0, "start": "2026-02-23 09:45:41.212579", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005626459.localdomain -> np0005626463.localdomain(192.168.122.106)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.103"], "delta": "0:00:02.040156", "end": "2026-02-23 09:45:43.976403", "msg": "", "rc": 0, "start": "2026-02-23 09:45:41.936247", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.103 (172.18.0.103) 56(84) bytes of data.\n64 bytes from 172.18.0.103: icmp_seq=1 ttl=64 time=0.059 ms\n64 bytes from 172.18.0.103: icmp_seq=2 ttl=64 time=0.074 ms\n64 bytes from 172.18.0.103: icmp_seq=3 ttl=64 time=0.068 ms\n\n--- 172.18.0.103 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2034ms\nrtt min/avg/max/mdev = 0.059/0.067/0.074/0.006 ms", "stdout_lines": ["PING 172.18.0.103 (172.18.0.103) 56(84) bytes of data.", "64 bytes from 172.18.0.103: icmp_seq=1 ttl=64 time=0.059 ms", "64 
bytes from 172.18.0.103: icmp_seq=2 ttl=64 time=0.074 ms", "64 bytes from 172.18.0.103: icmp_seq=3 ttl=64 time=0.068 ms", "", "--- 172.18.0.103 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2034ms", "rtt min/avg/max/mdev = 0.059/0.067/0.074/0.006 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] ******************************************** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.680948", "end": "2026-02-23 09:45:45.290453", "rc": 0, "start": "2026-02-23 09:45:44.609505", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", 
"--daemon_type", "mon", "--daemon_id", "np0005626463", "-f", "json"], "delta": "0:00:00.694800", "end": "2026-02-23 09:45:46.580823", "msg": "", "rc": 0, "start": "2026-02-23 09:45:45.886023", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"081a8332e685\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.82%\", \"created\": \"2026-02-23T09:44:39.327826Z\", \"daemon_id\": \"np0005626463\", \"daemon_name\": \"mon.np0005626463\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:45:20.786425Z daemon:mon.np0005626463 [INFO] \\\"Reconfigured mon.np0005626463 on host 'np0005626463.localdomain'\\\"\"], \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:59.885563Z\", \"memory_request\": 2147483648, \"memory_usage\": 42257612, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:39.216952Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"081a8332e685\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.82%\", \"created\": \"2026-02-23T09:44:39.327826Z\", \"daemon_id\": 
\"np0005626463\", \"daemon_name\": \"mon.np0005626463\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:45:20.786425Z daemon:mon.np0005626463 [INFO] \\\"Reconfigured mon.np0005626463 on host 'np0005626463.localdomain'\\\"\"], \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:44:59.885563Z\", \"memory_request\": 2147483648, \"memory_usage\": 42257612, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:39.216952Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005626463", "--force"], "delta": "0:00:09.582035", "end": "2026-02-23 09:45:56.785153", "msg": "", "rc": 0, "start": "2026-02-23 09:45:47.203118", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005626463 from host 'np0005626463.localdomain'", "stdout_lines": ["Removed mon.np0005626463 from host 'np0005626463.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:45:56.900437", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:46:06.910865", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005626463.localdomain] *********** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on 
np0005626463.localdomain] *********** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005626463.localdomain:172.18.0.103"], "delta": "0:00:03.570075", "end": "2026-02-23 09:46:11.250838", "msg": "", "rc": 0, "start": "2026-02-23 09:46:07.680763", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005626463 on host 'np0005626463.localdomain'", "stdout_lines": ["Deployed mon.np0005626463 on host 'np0005626463.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:46:11.374626", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:46:21.384729", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.735302", "end": "2026-02-23 09:46:22.586734", "msg": "", "rc": 0, "start": "2026-02-23 09:46:21.851432", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_FAILED_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 failed 
cephadm daemon(s)\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":42,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005626461\",\"np0005626460\",\"np0005626466\",\"np0005626465\",\"np0005626463\"],\"quorum_age\":5,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":81,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109572084,\"bytes_used\":611549184,\"bytes_avail\":44460441600,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":75,\"modified\":\"2026-02-23T09:46:05.154359+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_FAILED_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 failed cephadm 
daemon(s)\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":42,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005626461\",\"np0005626460\",\"np0005626466\",\"np0005626465\",\"np0005626463\"],\"quorum_age\":5,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":81,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109572084,\"bytes_used\":611549184,\"bytes_avail\":44460441600,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":75,\"modified\":\"2026-02-23T09:46:05.154359+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:00.972497", "end": "2026-02-23 09:46:24.152219", "msg": "", "rc": 0, "start": "2026-02-23 09:46:23.179722", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005626463.localdomain'\nScheduled to reconfig osd.5 on host 'np0005626463.localdomain'\nScheduled to reconfig osd.0 on host 
'np0005626465.localdomain'\nScheduled to reconfig osd.3 on host 'np0005626465.localdomain'\nScheduled to reconfig osd.1 on host 'np0005626466.localdomain'\nScheduled to reconfig osd.4 on host 'np0005626466.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005626463.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005626463.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005626465.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005626465.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005626466.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005626466.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005626459.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", 
"mgr", "fail"], "delta": "0:00:00.816348", "end": "2026-02-23 09:46:25.709300", "msg": "", "rc": 0, "start": "2026-02-23 09:46:24.892952", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:46:25.833777", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:46:35.845830", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005626459.localdomain: jid=j51712801772.501581 changed: [np0005626459.localdomain] => {"ansible_job_id": "j51712801772.501581", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.721564", "end": "2026-02-23 09:46:37.262931", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j51712801772.501581", "start": "2026-02-23 09:46:36.541367", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: 
[np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.689463", "end": "2026-02-23 09:46:39.569151", "rc": 0, "start": "2026-02-23 09:46:38.879688", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626463", "-f", "json"], "delta": "0:00:00.694754", "end": 
"2026-02-23 09:46:40.867689", "msg": "", "rc": 0, "start": "2026-02-23 09:46:40.172935", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"a517a74ed21c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"4.70%\", \"created\": \"2026-02-23T09:46:11.010605Z\", \"daemon_id\": \"np0005626463\", \"daemon_name\": \"mon.np0005626463\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:46:27.923486Z\", \"memory_request\": 2147483648, \"memory_usage\": 41859153, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:46:10.867790Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"a517a74ed21c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"4.70%\", \"created\": \"2026-02-23T09:46:11.010605Z\", \"daemon_id\": \"np0005626463\", \"daemon_name\": \"mon.np0005626463\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626463.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:46:27.923486Z\", \"memory_request\": 2147483648, \"memory_usage\": 41859153, \"ports\": 
[], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:46:10.867790Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005626459.localdomain] => { "msg": "Migrate mon: np0005626460.localdomain to node: np0005626465.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005626459.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.104"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.728462", "end": "2026-02-23 09:46:42.408046", "msg": "", "rc": 0, "start": "2026-02-23 09:46:41.679584", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":42,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005626461\",\"np0005626460\",\"np0005626466\",\"np0005626465\",\"np0005626463\"],\"quorum_age\":25,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":82,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":611631104,\"bytes_avail\":44460359680,\"bytes_total\":45071990784,\"read_bytes_sec\":19631,\"write_bytes_sec\":0,\"read_op_per_sec\":8,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":75,\"modified\":\"2026-02-23T09:46:05.154359+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 
daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":42,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005626461\",\"np0005626460\",\"np0005626466\",\"np0005626465\",\"np0005626463\"],\"quorum_age\":25,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":82,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":611631104,\"bytes_avail\":44460359680,\"bytes_total\":45071990784,\"read_bytes_sec\":19631,\"write_bytes_sec\":0,\"read_op_per_sec\":8,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":75,\"modified\":\"2026-02-23T09:46:05.154359+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005626459.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: 
[np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => {"changed": false, "examined": 2, "files": [{"atime": 1771839989.7183564, "ctime": 1771839990.1483696, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233126823, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771839989.9193625, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 367, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771839990.994395, "ctime": 1771839991.3654063, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233126824, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771839991.1854007, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 2, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 367, 'inode': 1233126823, 'dev': 64516, 'nlink': 1, 'atime': 1771839989.7183564, 'mtime': 1771839989.9193625, 'ctime': 1771839990.1483696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", 
"changed": true, "checksum": "c1d20d1c017aa542ef7dd79a8bf35eb88bf80ce3", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1771839989.7183564, "ctime": 1771839990.1483696, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233126823, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771839989.9193625, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 367, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "b3a569437307ba895c89b23c3c0a76c3", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 367, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 1233126824, 'dev': 64516, 'nlink': 1, 'atime': 1771839990.994395, 'mtime': 1771839991.1854007, 'ctime': 1771839991.3654063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "fd04ead011675324e6ecfc5b53f1d9fb294b7881", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1771839990.994395, "ctime": 1771839991.3654063, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233126824, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": 
false, "mode": "0600", "mtime": 1771839991.1854007, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "31f18ddffd881a7d7131636a0e4ce49c", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.764422", "end": "2026-02-23 09:46:47.105797", "msg": "", "rc": 0, "start": "2026-02-23 09:46:46.341375", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":23,\"available\":true,\"active_name\":\"np0005626460.fyrady\",\"num_standby\":5}", "stdout_lines": ["", "{\"epoch\":23,\"available\":true,\"active_name\":\"np0005626460.fyrady\",\"num_standby\":5}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005626460.fyrady", "available": true, "epoch": 23, "num_standby": 5}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for 
np0005626459.localdomain

TASK [ceph_migrate : Refresh ceph_cli] *****************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain

TASK [ceph_migrate : Set ceph CLI] *********************************************
ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false}

TASK [ceph_migrate : Force fail ceph mgr] **************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:01.252476", "end": "2026-02-23 09:46:49.142961", "msg": "", "rc": 0, "start": "2026-02-23 09:46:47.890485", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph_migrate : Wait for cephadm to reconcile] ****************************
Pausing for 10 seconds
ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:46:49.271431", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:46:59.279707", "user_input": ""}

TASK [ceph_migrate : Get the ceph orchestrator status with] ********************
ASYNC OK on np0005626459.localdomain: jid=j916260136284.502716
changed: [np0005626459.localdomain] => {"ansible_job_id": "j916260136284.502716", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.661787", "end": "2026-02-23 09:47:00.643028", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j916260136284.502716", "start": "2026-02-23 09:46:59.981241", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]}

TASK [ceph_migrate : Restart the active mgr] ***********************************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ********
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] **************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005626459.localdomain

TASK [ceph_migrate : Get ceph_cli] *********************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain

TASK [ceph_migrate : Set ceph CLI] *********************************************
ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false}

TASK [ceph_migrate : MON - wait daemons] ***************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626460", "-f", "json"], "delta": "0:00:00.694594", "end": "2026-02-23 09:47:03.190150", "msg": "", "rc": 0, "start": "2026-02-23 09:47:02.495556", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"0b532631eba1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.16%\", \"created\": \"2026-02-23T07:38:22.231188Z\", \"daemon_id\": \"np0005626460\", \"daemon_name\": \"mon.np0005626460\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:46:57.658845Z daemon:mon.np0005626460 [INFO] \\\"Reconfigured mon.np0005626460 on host 'np0005626460.localdomain'\\\"\"], \"hostname\": \"np0005626460.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:46:50.853882Z\", \"memory_request\": 2147483648, \"memory_usage\": 134322585, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T07:38:22.088779Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"0b532631eba1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.16%\", \"created\": \"2026-02-23T07:38:22.231188Z\", \"daemon_id\": \"np0005626460\", \"daemon_name\": \"mon.np0005626460\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:46:57.658845Z daemon:mon.np0005626460 [INFO] \\\"Reconfigured mon.np0005626460 on host 'np0005626460.localdomain'\\\"\"], \"hostname\": \"np0005626460.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:46:50.853882Z\", \"memory_request\": 2147483648, \"memory_usage\": 134322585, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T07:38:22.088779Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]}

TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] ***
changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005626460", "--force"], "delta": "0:00:02.414643", "end": "2026-02-23 09:47:06.337612", "msg": "", "rc": 0, "start": "2026-02-23 09:47:03.922969", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005626460 from host 'np0005626460.localdomain'", "stdout_lines": ["Removed mon.np0005626460 from host 'np0005626460.localdomain'"]}

TASK [ceph_migrate : DRAIN - remove label from the src node] *******************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005626459.localdomain

TASK [ceph_migrate : Set/Unset labels - rm] ************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : Print nodes] **********************************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : Set/Unset labels - rm] ************************************
changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626460.localdomain", "mon"], "delta": "0:00:00.722507", "end": "2026-02-23 09:47:07.659933", "item": ["np0005626460.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:47:06.937426", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005626460.localdomain", "stdout_lines": ["Removed label mon from host np0005626460.localdomain"]}
changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626460.localdomain", "mgr"], "delta": "0:00:00.724764", "end": "2026-02-23 09:47:09.511979", "item": ["np0005626460.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-23 09:47:08.787215", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005626460.localdomain", "stdout_lines": ["Removed label mgr from host np0005626460.localdomain"]}
changed: [np0005626459.localdomain] => (item=['np0005626460.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626460.localdomain", "_admin"], "delta": "0:00:00.663372", "end": "2026-02-23 09:47:10.726429", "item": ["np0005626460.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:47:10.063057", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005626460.localdomain", "stdout_lines": ["Removed label _admin from host np0005626460.localdomain"]}

TASK [ceph_migrate : Wait for the orchestrator to remove labels] ***************
Pausing for 10 seconds
ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:47:10.848955", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:47:20.860553", "user_input": ""}

TASK [ceph_migrate : DRAIN - Drain the host] ***********************************
changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005626460.localdomain"], "delta": "0:00:00.775987", "end": "2026-02-23 09:47:22.310445", "msg": "", "rc": 0, "start": "2026-02-23 09:47:21.534458", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005626460.localdomain'\ntype id \n-------------------- ---------------\ncrash np0005626460 \nmgr np0005626460.fyrady", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005626460.localdomain'", "type id ", "-------------------- ---------------", "crash np0005626460 ", "mgr np0005626460.fyrady"]}

TASK [ceph_migrate : DRAIN - cleanup the host] *********************************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MON - check host in hostmap] ******************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005626460.localdomain", "-f", "json"], "delta": "0:00:00.649658", "end": "2026-02-23 09:47:23.614249", "msg": "", "rc": 0, "start": "2026-02-23 09:47:22.964591", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.104\", \"hostname\": \"np0005626460.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.104\", \"hostname\": \"np0005626460.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]}

TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] **********
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005626460.localdomain", "--force"], "delta": "0:00:00.699017", "end": "2026-02-23 09:47:24.881247", "msg": "", "rc": 0, "start": "2026-02-23 09:47:24.182230", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005626460.localdomain'", "stdout_lines": ["Removed host 'np0005626460.localdomain'"]}

TASK [ceph_migrate : MON - Get current mon IP address] *************************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] ***
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005626459.localdomain

TASK [ceph_migrate : MON - Print current mon IP address] ***********************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] ***
changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.483950.2026-02-23@09:47:25~", "changed": true, "found": 1, "msg": "1 line(s) removed"}

TASK [ceph_migrate : MON - Refresh os-net-config (src node)] *******************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MON - manually rm the ip address (src node)] **************
changed: [np0005626459.localdomain -> np0005626460.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.104/24", "dev", "vlan21"], "delta": "0:00:00.005132", "end": "2026-02-23 09:47:26.450159", "msg": "", "rc": 0, "start": "2026-02-23 09:47:26.445027", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] ***
changed: [np0005626459.localdomain -> np0005626465.localdomain(192.168.122.107)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.295416.2026-02-23@09:47:27~", "changed": true, "msg": "line added"}

TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] ****************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MON - statically assign the ip address to the target node] ***
changed: [np0005626459.localdomain -> np0005626465.localdomain(192.168.122.107)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.104/24", "dev", "vlan21"], "delta": "0:00:00.004523", "end": "2026-02-23 09:47:28.420675", "msg": "", "rc": 0, "start": "2026-02-23 09:47:28.416152", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] ***
changed: [np0005626459.localdomain -> np0005626465.localdomain(192.168.122.107)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.104"], "delta": "0:00:02.075678", "end": "2026-02-23 09:47:31.191585", "msg": "", "rc": 0, "start": "2026-02-23 09:47:29.115907", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.104 (172.18.0.104) 56(84) bytes of data.\n64 bytes from 172.18.0.104: icmp_seq=1 ttl=64 time=0.057 ms\n64 bytes from 172.18.0.104: icmp_seq=2 ttl=64 time=0.041 ms\n64 bytes from 172.18.0.104: icmp_seq=3 ttl=64 time=0.463 ms\n\n--- 172.18.0.104 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2069ms\nrtt min/avg/max/mdev = 0.041/0.187/0.463/0.195 ms", "stdout_lines": ["PING 172.18.0.104 (172.18.0.104) 56(84) bytes of data.", "64 bytes from 172.18.0.104: icmp_seq=1 ttl=64 time=0.057 ms", "64 bytes from 172.18.0.104: icmp_seq=2 ttl=64 time=0.041 ms", "64 bytes from 172.18.0.104: icmp_seq=3 ttl=64 time=0.463 ms", "", "--- 172.18.0.104 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2069ms", "rtt min/avg/max/mdev = 0.041/0.187/0.463/0.195 ms"]}

TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] ***
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Unmanage mons] ********************************************
ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.684490", "end": "2026-02-23 09:47:32.481507", "rc": 0, "start": "2026-02-23 09:47:31.797017", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]}

TASK [ceph_migrate : Print the resulting spec] *********************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : MON - Get tmp mon] ****************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626465", "-f", "json"], "delta": "0:00:00.673883", "end": "2026-02-23 09:47:33.731730", "msg": "", "rc": 0, "start": "2026-02-23 09:47:33.057847", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"287dcf2f52ac\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.56%\", \"created\": \"2026-02-23T09:44:33.999188Z\", \"daemon_id\": \"np0005626465\", \"daemon_name\": \"mon.np0005626465\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:47:24.504737Z daemon:mon.np0005626465 [INFO] \\\"Reconfigured mon.np0005626465 on host 'np0005626465.localdomain'\\\"\"], \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:46:51.007010Z\", \"memory_request\": 2147483648, \"memory_usage\": 65798144, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:33.892574Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"287dcf2f52ac\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.56%\", \"created\": \"2026-02-23T09:44:33.999188Z\", \"daemon_id\": \"np0005626465\", \"daemon_name\": \"mon.np0005626465\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:47:24.504737Z daemon:mon.np0005626465 [INFO] \\\"Reconfigured mon.np0005626465 on host 'np0005626465.localdomain'\\\"\"], \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:46:51.007010Z\", \"memory_request\": 2147483648, \"memory_usage\": 65798144, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:33.892574Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]}

TASK [ceph_migrate : MON - Delete the running mon] *****************************
changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005626465", "--force"], "delta": "0:00:02.203676", "end": "2026-02-23 09:47:36.494986", "msg": "", "rc": 0, "start": "2026-02-23 09:47:34.291310", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005626465 from host 'np0005626465.localdomain'", "stdout_lines": ["Removed mon.np0005626465 from host 'np0005626465.localdomain'"]}

TASK [ceph_migrate : MON - Wait for the current mon to be deleted] *************
Pausing for 10 seconds
ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:47:36.602980", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:47:46.616256", "user_input": ""}

TASK [ceph_migrate : MON - Redeploy mon on np0005626465.localdomain] ***********
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : MON - Redeploy mon on np0005626465.localdomain] ***********
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005626465.localdomain:172.18.0.104"], "delta": "0:00:03.402913", "end": "2026-02-23 09:47:50.520999", "msg": "", "rc": 0, "start": "2026-02-23 09:47:47.118086", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005626465 on host 'np0005626465.localdomain'", "stdout_lines": ["Deployed mon.np0005626465 on host 'np0005626465.localdomain'"]}

TASK [ceph_migrate : MON - Wait for the spec to be updated] ********************
Pausing for 10 seconds
ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:47:50.650893", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:48:00.665420", "user_input": ""}

TASK [ceph_migrate : MON - Check mons quorum] **********************************
changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.748722", "end": "2026-02-23 09:48:01.879261", "msg": "", "rc": 0, "start": "2026-02-23 09:48:01.130539", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":50,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626461\",\"np0005626466\",\"np0005626463\"],\"quorum_age\":21,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":83,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109574332,\"bytes_used\":611696640,\"bytes_avail\":44460294144,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-23T09:47:55.049799+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":50,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626461\",\"np0005626466\",\"np0005626463\"],\"quorum_age\":21,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":83,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109574332,\"bytes_used\":611696640,\"bytes_avail\":44460294144,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-23T09:47:55.049799+0000\",\"services\":{}},\"progress_events\":{}}"]}

TASK [ceph_migrate : reconfig osds] ********************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:00.947959", "end": "2026-02-23 09:48:03.429885", "msg": "", "rc": 0, "start": "2026-02-23 09:48:02.481926", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005626463.localdomain'\nScheduled to reconfig osd.5 on host 'np0005626463.localdomain'\nScheduled to reconfig osd.0 on host 'np0005626465.localdomain'\nScheduled to reconfig osd.3 on host 'np0005626465.localdomain'\nScheduled to reconfig osd.1 on host 'np0005626466.localdomain'\nScheduled to reconfig osd.4 on host 'np0005626466.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005626463.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005626463.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005626465.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005626465.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005626466.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005626466.localdomain'"]}

TASK [ceph_migrate : force-fail ceph mgr] **************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005626459.localdomain

TASK [ceph_migrate : Refresh ceph_cli] *****************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain

TASK [ceph_migrate : Set ceph CLI] *********************************************
ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false}

TASK [ceph_migrate : Force fail ceph mgr] **************************************
changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:04.182379", "end": "2026-02-23 09:48:08.324225", "msg": "", "rc": 0, "start": "2026-02-23 09:48:04.141846", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph_migrate : Wait for cephadm to reconcile] ****************************
Pausing for 10 seconds
ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:48:08.435036", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:48:18.447112", "user_input": ""}

TASK [ceph_migrate : Get the ceph orchestrator status with] ********************
ASYNC OK on np0005626459.localdomain: jid=j579070715344.505578
changed: [np0005626459.localdomain] => {"ansible_job_id": "j579070715344.505578", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.685123", "end": "2026-02-23 09:48:19.839329", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j579070715344.505578", "start": "2026-02-23 09:48:19.154206", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]}

TASK [ceph_migrate : Restart the active mgr] ***********************************
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ********
skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : MON - Manage mons] ****************************************
ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.643288", "end": "2026-02-23 09:48:22.161178", "rc": 0, "start": "2026-02-23 09:48:21.517890", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]}

TASK [ceph_migrate : MON - wait daemons] ***************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain

TASK [ceph_migrate : print daemon id option] ***********************************
skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"}

TASK [ceph_migrate : wait for mon] *********************************************
changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626465", "-f", "json"], "delta": "0:00:00.662348", "end": "2026-02-23 09:48:23.444202", "msg": "", "rc": 0, "start": "2026-02-23 09:48:22.781854", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"352d16d9927e\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.62%\", \"created\": \"2026-02-23T09:47:50.222110Z\", \"daemon_id\": \"np0005626465\", \"daemon_name\": \"mon.np0005626465\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:48:10.181210Z\", \"memory_request\": 2147483648, \"memory_usage\": 59223572, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:47:50.128033Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"352d16d9927e\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.62%\", \"created\": \"2026-02-23T09:47:50.222110Z\", \"daemon_id\": \"np0005626465\", \"daemon_name\": \"mon.np0005626465\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626465.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:48:10.181210Z\", \"memory_request\": 2147483648, \"memory_usage\": 59223572, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:47:50.128033Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]}

TASK [ceph_migrate : Get ceph_cli] *********************************************
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain

TASK [ceph_migrate : Set ceph CLI] *********************************************
ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false}

TASK [ceph_migrate : Migrate mon] **********************************************
ok: [np0005626459.localdomain] => {
    "msg": "Migrate mon: np0005626461.localdomain to node: np0005626466.localdomain"
}

TASK [ceph_migrate : MON - Get current mon IP address from node_map override] ***
ok: [np0005626459.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.105"}, "changed": false}

TASK [ceph_migrate : MON - Check mons quorum] **********************************
changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.803229", "end": "2026-02-23 09:48:25.068191", "msg": "", "rc": 0, "start": "2026-02-23 09:48:24.264962", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":50,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626461\",\"np0005626466\",\"np0005626463\"],\"quorum_age\":45,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":611782656,\"bytes_avail\":44460208128,\"bytes_total\":45071990784,\"read_bytes_sec\":17572,\"write_bytes_sec\":0,\"read_op_per_sec\":7,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-23T09:47:55.049799+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["",
"{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":50,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626461\",\"np0005626466\",\"np0005626463\"],\"quorum_age\":45,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":611782656,\"bytes_avail\":44460208128,\"bytes_total\":45071990784,\"read_bytes_sec\":17572,\"write_bytes_sec\":0,\"read_op_per_sec\":7,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":79,\"modified\":\"2026-02-23T09:47:55.049799+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005626459.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: 
[np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => {"changed": false, "examined": 2, "files": [{"atime": 1771840092.0034144, "ctime": 1771840092.4354277, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233126830, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771840092.2124207, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771840093.2694538, "ctime": 1771840093.690467, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233368618, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1771840093.46846, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 2, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 271, 'inode': 1233126830, 'dev': 64516, 'nlink': 1, 'atime': 1771840092.0034144, 'mtime': 1771840092.2124207, 'ctime': 1771840092.4354277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", 
"changed": true, "checksum": "8e5d3611d65a9d02a6b4ceea8e936ecb4f32eaaa", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1771840092.0034144, "ctime": 1771840092.4354277, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233126830, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771840092.2124207, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "4e0cda835b7f1165da8459bb7acd7033", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 271, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 1233368618, 'dev': 64516, 'nlink': 1, 'atime': 1771840093.2694538, 'mtime': 1771840093.46846, 'ctime': 1771840093.690467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "fd04ead011675324e6ecfc5b53f1d9fb294b7881", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1771840093.2694538, "ctime": 1771840093.690467, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1233368618, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": 
false, "mode": "0600", "mtime": 1771840093.46846, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "31f18ddffd881a7d7131636a0e4ce49c", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.690569", "end": "2026-02-23 09:48:29.695732", "msg": "", "rc": 0, "start": "2026-02-23 09:48:29.005163", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":32,\"available\":true,\"active_name\":\"np0005626463.wtksup\",\"num_standby\":3}", "stdout_lines": ["", "{\"epoch\":32,\"available\":true,\"active_name\":\"np0005626463.wtksup\",\"num_standby\":3}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005626463.wtksup", "available": true, "epoch": 32, "num_standby": 3}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "mgr.active_name | regex_search(cur_mon | split('.') 
| first) or mgr.active_name | regex_search(target_node | split('.') | first)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005626459.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626461", "-f", "json"], "delta": "0:00:03.818750", "end": "2026-02-23 09:48:34.317830", "msg": "", "rc": 0, "start": "2026-02-23 09:48:30.499080", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"aa99e4e25a30\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.16%\", \"created\": \"2026-02-23T07:38:19.779605Z\", \"daemon_id\": \"np0005626461\", \"daemon_name\": \"mon.np0005626461\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:48:23.447325Z daemon:mon.np0005626461 [INFO] \\\"Reconfigured mon.np0005626461 on host 'np0005626461.localdomain'\\\"\"], \"hostname\": \"np0005626461.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:48:10.050351Z\", \"memory_request\": 2147483648, \"memory_usage\": 154455244, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T07:38:19.675914Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"aa99e4e25a30\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.16%\", \"created\": \"2026-02-23T07:38:19.779605Z\", \"daemon_id\": \"np0005626461\", \"daemon_name\": \"mon.np0005626461\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:48:23.447325Z daemon:mon.np0005626461 [INFO] \\\"Reconfigured mon.np0005626461 on host 'np0005626461.localdomain'\\\"\"], \"hostname\": \"np0005626461.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:48:10.050351Z\", \"memory_request\": 2147483648, \"memory_usage\": 154455244, \"ports\": [], 
\"service_name\": \"mon\", \"started\": \"2026-02-23T07:38:19.675914Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005626461", "--force"], "delta": "0:00:02.483145", "end": "2026-02-23 09:48:37.565273", "msg": "", "rc": 0, "start": "2026-02-23 09:48:35.082128", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005626461 from host 'np0005626461.localdomain'", "stdout_lines": ["Removed mon.np0005626461 from host 'np0005626461.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005626459.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626461.localdomain", "mon"], "delta": "0:00:00.732019", "end": "2026-02-23 09:48:38.952851", "item": ["np0005626461.localdomain", "mon"], "msg": "", "rc": 0, "start": "2026-02-23 09:48:38.220832", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005626461.localdomain", "stdout_lines": ["Removed label mon from host np0005626461.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626461.localdomain", "mgr"], "delta": "0:00:05.199396", "end": "2026-02-23 09:48:44.674989", "item": ["np0005626461.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2026-02-23 09:48:39.475593", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005626461.localdomain", "stdout_lines": ["Removed label mgr from host np0005626461.localdomain"]} changed: [np0005626459.localdomain] => (item=['np0005626461.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005626461.localdomain", "_admin"], "delta": "0:00:00.792997", "end": 
"2026-02-23 09:48:45.980072", "item": ["np0005626461.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2026-02-23 09:48:45.187075", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005626461.localdomain", "stdout_lines": ["Removed label _admin from host np0005626461.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:48:46.091445", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:48:56.104035", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005626461.localdomain"], "delta": "0:00:00.766811", "end": "2026-02-23 09:48:57.618632", "msg": "", "rc": 0, "start": "2026-02-23 09:48:56.851821", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005626461.localdomain'\ntype id \n-------------------- ---------------\nmgr np0005626461.lrfquh\ncrash np0005626461 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005626461.localdomain'", "type id ", "-------------------- ---------------", "mgr np0005626461.lrfquh", "crash np0005626461 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} 
TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005626461.localdomain", "-f", "json"], "delta": "0:00:00.692050", "end": "2026-02-23 09:48:58.948463", "msg": "", "rc": 0, "start": "2026-02-23 09:48:58.256413", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.105\", \"hostname\": \"np0005626461.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.105\", \"hostname\": \"np0005626461.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005626461.localdomain", "--force"], "delta": "0:00:00.686894", "end": "2026-02-23 09:49:00.230683", "msg": "", "rc": 0, "start": "2026-02-23 09:48:59.543789", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005626461.localdomain'", "stdout_lines": ["Removed host 'np0005626461.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "mon_ipaddr 
is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005626459.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.485988.2026-02-23@09:49:01~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005626459.localdomain -> np0005626461.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.105/24", "dev", "vlan21"], "delta": "0:00:00.005659", "end": "2026-02-23 09:49:02.014474", "msg": "", "rc": 0, "start": "2026-02-23 09:49:02.008815", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005626459.localdomain -> np0005626466.localdomain(192.168.122.108)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.298930.2026-02-23@09:49:03~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional 
result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005626459.localdomain -> np0005626466.localdomain(192.168.122.108)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.105/24", "dev", "vlan21"], "delta": "0:00:00.004210", "end": "2026-02-23 09:49:03.990626", "msg": "", "rc": 0, "start": "2026-02-23 09:49:03.986416", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005626459.localdomain -> np0005626466.localdomain(192.168.122.108)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.105"], "delta": "0:00:02.066304", "end": "2026-02-23 09:49:06.759602", "msg": "", "rc": 0, "start": "2026-02-23 09:49:04.693298", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.105 (172.18.0.105) 56(84) bytes of data.\n64 bytes from 172.18.0.105: icmp_seq=1 ttl=64 time=0.076 ms\n64 bytes from 172.18.0.105: icmp_seq=2 ttl=64 time=0.055 ms\n64 bytes from 172.18.0.105: icmp_seq=3 ttl=64 time=0.074 ms\n\n--- 172.18.0.105 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2058ms\nrtt min/avg/max/mdev = 0.055/0.068/0.076/0.009 ms", "stdout_lines": ["PING 172.18.0.105 (172.18.0.105) 56(84) bytes of data.", "64 bytes from 172.18.0.105: icmp_seq=1 ttl=64 time=0.076 ms", "64 bytes from 172.18.0.105: icmp_seq=2 ttl=64 time=0.055 ms", "64 bytes from 172.18.0.105: icmp_seq=3 ttl=64 time=0.074 ms", "", "--- 172.18.0.105 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2058ms", "rtt min/avg/max/mdev = 0.055/0.068/0.076/0.009 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] 
******************************************** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.934785", "end": "2026-02-23 09:49:08.405045", "rc": 0, "start": "2026-02-23 09:49:07.470260", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626466", "-f", "json"], "delta": "0:00:00.726238", "end": "2026-02-23 09:49:09.834016", "msg": "", "rc": 0, "start": "2026-02-23 09:49:09.107778", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"c4c7b411729c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], 
\"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.38%\", \"created\": \"2026-02-23T09:44:31.421078Z\", \"daemon_id\": \"np0005626466\", \"daemon_name\": \"mon.np0005626466\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:49:03.481353Z daemon:mon.np0005626466 [INFO] \\\"Reconfigured mon.np0005626466 on host 'np0005626466.localdomain'\\\"\"], \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:48:09.960207Z\", \"memory_request\": 2147483648, \"memory_usage\": 85301657, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:31.327586Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"c4c7b411729c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.38%\", \"created\": \"2026-02-23T09:44:31.421078Z\", \"daemon_id\": \"np0005626466\", \"daemon_name\": \"mon.np0005626466\", \"daemon_type\": \"mon\", \"events\": [\"2026-02-23T09:49:03.481353Z daemon:mon.np0005626466 [INFO] \\\"Reconfigured mon.np0005626466 on host 'np0005626466.localdomain'\\\"\"], \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:48:09.960207Z\", \"memory_request\": 2147483648, \"memory_usage\": 85301657, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:44:31.327586Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": 
\"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005626466", "--force"], "delta": "0:00:02.372686", "end": "2026-02-23 09:49:12.796323", "msg": "", "rc": 0, "start": "2026-02-23 09:49:10.423637", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005626466 from host 'np0005626466.localdomain'", "stdout_lines": ["Removed mon.np0005626466 from host 'np0005626466.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:49:12.924970", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:49:22.937150", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005626466.localdomain] *********** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on np0005626466.localdomain] *********** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005626466.localdomain:172.18.0.105"], "delta": "0:00:03.322845", "end": "2026-02-23 09:49:26.740866", "msg": "", "rc": 0, "start": 
"2026-02-23 09:49:23.418021", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005626466 on host 'np0005626466.localdomain'", "stdout_lines": ["Deployed mon.np0005626466 on host 'np0005626466.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:49:26.856614", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:49:36.869301", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.713027", "end": "2026-02-23 09:49:38.058309", "msg": "", "rc": 0, "start": "2026-02-23 09:49:37.345282", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":62,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626463\",\"np0005626465\",\"np0005626466\"],\"quorum_age\":3,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":611782656,\"bytes_avail\":44460208128,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2026-02-23T09:49:13.911120+0000\",\"services\":{\"mon\":{\"daemons\":{\"summary\":\"\",\"np0005626465\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":62,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626463\",\"np0005626465\",\"np0005626466\"],\"quorum_age\":3,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":611782656,\"bytes_avail\":44460208128,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2026-02-23T09:49:13.911120+0000\",\"services\":{\"mon\":{\"daemons\":{\"summary\":\"\",\"np0005626465\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:01.000490", "end": "2026-02-23 09:49:39.646327", "msg": "", "rc": 0, "start": "2026-02-23 09:49:38.645837", "stderr": "", 
"stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005626463.localdomain'\nScheduled to reconfig osd.5 on host 'np0005626463.localdomain'\nScheduled to reconfig osd.0 on host 'np0005626465.localdomain'\nScheduled to reconfig osd.3 on host 'np0005626465.localdomain'\nScheduled to reconfig osd.1 on host 'np0005626466.localdomain'\nScheduled to reconfig osd.4 on host 'np0005626466.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005626463.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005626463.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005626465.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005626465.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005626466.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005626466.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005626459.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005626459.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", 
"--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.802407", "end": "2026-02-23 09:49:41.098650", "msg": "", "rc": 0, "start": "2026-02-23 09:49:40.296243", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2026-02-23 09:49:41.203137", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2026-02-23 09:49:51.215110", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005626459.localdomain: jid=j818865569971.509113 changed: [np0005626459.localdomain] => {"ansible_job_id": "j818865569971.509113", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.669023", "end": "2026-02-23 09:49:52.623754", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j818865569971.509113", "start": "2026-02-23 09:49:51.954731", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": 
"\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005626459.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:03.747124", "end": "2026-02-23 09:49:58.055208", "rc": 0, "start": "2026-02-23 09:49:54.308084", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005626459.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005626459.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", 
"/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005626466", "-f", "json"], "delta": "0:00:00.718443", "end": "2026-02-23 09:49:59.399109", "msg": "", "rc": 0, "start": "2026-02-23 09:49:58.680666", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"2ceecf5aca3e\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.90%\", \"created\": \"2026-02-23T09:49:26.514192Z\", \"daemon_id\": \"np0005626466\", \"daemon_name\": \"mon.np0005626466\", \"daemon_type\": \"mon\", \"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:49:43.337726Z\", \"memory_request\": 2147483648, \"memory_usage\": 31205621, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:49:26.412230Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"2ceecf5aca3e\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:485411749726179fe5cd880e2cf308261b35150e4b356ddb7100f52e02b2e353\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\"], \"container_image_id\": \"957884a578836762598b9ee0d341c2f25fc7f75b4c9e8a2eedc704f3f8efac7d\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.90%\", \"created\": \"2026-02-23T09:49:26.514192Z\", \"daemon_id\": \"np0005626466\", \"daemon_name\": \"mon.np0005626466\", \"daemon_type\": \"mon\", 
\"hostname\": \"np0005626466.localdomain\", \"is_active\": false, \"last_refresh\": \"2026-02-23T09:49:43.337726Z\", \"memory_request\": 2147483648, \"memory_usage\": 31205621, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2026-02-23T09:49:26.412230Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-381.el9cp\"}]"]} TASK [ceph_migrate : Sleep before moving to the next mon] ********************** Pausing for 30 seconds ok: [np0005626459.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2026-02-23 09:49:59.577435", "stderr": "", "stdout": "Paused for 30.01 seconds", "stop": "2026-02-23 09:50:29.582759", "user_input": ""} TASK [ceph_migrate : POST - Dump logs] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_load.yaml for np0005626459.localdomain TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005626459.localdomain] => {"changed": false, "examined": 4, "files": [{"atime": 1771839893.8290172, "ctime": 1771839893.6930134, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 349292, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771839844.564712, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 142, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771839893.8340173, "ctime": 1771839893.6930134, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1317108073, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771832205.4769254, "nlink": 1, "path": 
"/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771833472.164338, "ctime": 1771839893.6930134, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1317108074, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771833469.713273, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.openstack.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1771833473.2673671, "ctime": 1771839893.6930134, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1317108075, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1771833470.6812987, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.manila.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 4, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Restore files] ******************************************** changed: [np0005626459.localdomain] => (item=ceph.conf) => {"ansible_loop_var": "item", "changed": true, "checksum": "35652e81851ba988308532419ad65d6d9c0476ad", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "item": "ceph.conf", "md5sum": "46449f719b281efc7c936ea24835c325", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 142, "src": "/home/tripleo-admin/ceph_client/ceph.conf", "state": "file", "uid": 0} changed: 
[np0005626459.localdomain] => (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": true, "checksum": "fd04ead011675324e6ecfc5b53f1d9fb294b7881", "dest": "/etc/ceph/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": "ceph.client.admin.keyring", "md5sum": "31f18ddffd881a7d7131636a0e4ce49c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 151, "src": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005626459.localdomain] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client/logs", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Get Ceph Health] ****************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "-s", "-f", "json"], "delta": "0:00:04.998156", "end": "2026-02-23 09:50:37.151049", "msg": "", "rc": 0, "start": "2026-02-23 09:50:32.152893", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":62,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626463\",\"np0005626465\",\"np0005626466\"],\"quorum_age\":62,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":85,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109576580,\"bytes_used\":611885056,\"bytes_avail\":44460105728,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2026-02-23T09:49:13.911120+0000\",\"services\":{\"mon\":{\"daemons\":{\"summary\":\"\",\"np0005626465\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by 
cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":62,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005626463\",\"np0005626465\",\"np0005626466\"],\"quorum_age\":62,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":85,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1771832350,\"num_in_osds\":6,\"osd_in_since\":1771832330,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109576580,\"bytes_used\":611885056,\"bytes_avail\":44460105728,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":17,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005626463.qcthuc\",\"status\":\"up:active\",\"gid\":26518}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2026-02-23T09:49:13.911120+0000\",\"services\":{\"mon\":{\"daemons\":{\"summary\":\"\",\"np0005626465\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : Load ceph data] ******************************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"ceph": {"election_epoch": 62, "fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46", "fsmap": {"by_rank": [{"filesystem_id": 1, "gid": 26518, "name": "mds.np0005626463.qcthuc", "rank": 0, "status": "up:active"}], "epoch": 17, "id": 1, "in": 1, "max": 1, "up": 1, "up:standby": 2}, "health": {"checks": {"CEPHADM_STRAY_DAEMON": {"muted": false, "severity": "HEALTH_WARN", "summary": {"count": 1, "message": 
"1 stray daemon(s) not managed by cephadm"}}, "CEPHADM_STRAY_HOST": {"muted": false, "severity": "HEALTH_WARN", "summary": {"count": 1, "message": "1 stray host(s) with 1 daemon(s) not managed by cephadm"}}}, "mutes": [], "status": "HEALTH_WARN"}, "mgrmap": {"available": true, "modules": ["cephadm", "iostat", "nfs", "restful"], "num_standbys": 2, "services": {}}, "monmap": {"epoch": 15, "min_mon_release_name": "reef", "num_mons": 3}, "osdmap": {"epoch": 85, "num_in_osds": 6, "num_osds": 6, "num_remapped_pgs": 0, "num_up_osds": 6, "osd_in_since": 1771832330, "osd_up_since": 1771832350}, "pgmap": {"bytes_avail": 44460105728, "bytes_total": 45071990784, "bytes_used": 611885056, "data_bytes": 109576580, "num_objects": 69, "num_pgs": 177, "num_pools": 7, "pgs_by_state": [{"count": 177, "state_name": "active+clean"}]}, "progress_events": {}, "quorum": [0, 1, 2], "quorum_age": 62, "quorum_names": ["np0005626463", "np0005626465", "np0005626466"], "servicemap": {"epoch": 82, "modified": "2026-02-23T09:49:13.911120+0000", "services": {"mon": {"daemons": {"np0005626465": {"addr": "(unrecognized address family 0)/0", "gid": 0, "metadata": {}, "start_epoch": 0, "start_stamp": "0.000000", "task_status": {}}, "summary": ""}}}}}}, "changed": false} TASK [ceph_migrate : Dump ceph -s output to log file] ************************** changed: [np0005626459.localdomain] => {"changed": true, "checksum": "b51d5ade015e3c8db917b98d2f5c40e9d8e3f640", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_health.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "e9a86aaeaf0563edaca104c3cc56f5c1", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1624, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771840237.3228314-64150-205417066558438/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph Orch ServiceMap] ********************************* changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", 
"shell", "--", "ceph", "orch", "ls", "-f", "json"], "delta": "0:00:04.961640", "end": "2026-02-23 09:50:43.414415", "msg": "", "rc": 0, "start": "2026-02-23 09:50:38.452775", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2026-02-23T07:36:37.670053Z\", \"last_refresh\": \"2026-02-23T09:49:43.289412Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mds\"}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-23T09:42:42.690754Z\", \"last_refresh\": \"2026-02-23T09:49:43.289894Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mgr\"}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-23T09:44:10.829609Z\", \"last_refresh\": \"2026-02-23T09:49:43.290030Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T09:50:01.491074Z service:mon [INFO] \\\"service was created\\\"\"], \"placement\": {\"label\": \"mon\"}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-23T09:49:57.902137Z\", \"last_refresh\": \"2026-02-23T09:49:43.290166Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": 
\"node-proxy\", \"status\": {\"created\": \"2026-02-23T07:36:51.489181Z\", \"running\": 0, \"size\": 0}}, {\"placement\": {\"hosts\": [\"np0005626463.localdomain\", \"np0005626465.localdomain\", \"np0005626466.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-23T07:37:19.249273Z\", \"last_refresh\": \"2026-02-23T09:49:43.289590Z\", \"running\": 6, \"size\": 6}}]", "stdout_lines": ["", "[{\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2026-02-23T07:36:37.670053Z\", \"last_refresh\": \"2026-02-23T09:49:43.289412Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mds\"}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2026-02-23T09:42:42.690754Z\", \"last_refresh\": \"2026-02-23T09:49:43.289894Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mgr\"}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2026-02-23T09:44:10.829609Z\", \"last_refresh\": \"2026-02-23T09:49:43.290030Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2026-02-23T09:50:01.491074Z service:mon [INFO] \\\"service was created\\\"\"], \"placement\": {\"label\": \"mon\"}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2026-02-23T09:49:57.902137Z\", \"last_refresh\": \"2026-02-23T09:49:43.290166Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2026-02-23T07:36:51.489181Z\", \"running\": 0, \"size\": 0}}, {\"placement\": {\"hosts\": [\"np0005626463.localdomain\", 
\"np0005626465.localdomain\", \"np0005626466.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2026-02-23T07:37:19.249273Z\", \"last_refresh\": \"2026-02-23T09:49:43.289590Z\", \"running\": 6, \"size\": 6}}]"]} TASK [ceph_migrate : Load Service Map] ***************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"servicemap": [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-23T07:36:37.670053Z", "last_refresh": "2026-02-23T09:49:43.289412Z", "running": 3, "size": 3}}, {"placement": {"label": "mds"}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-23T09:42:42.690754Z", "last_refresh": "2026-02-23T09:49:43.289894Z", "running": 3, "size": 3}}, {"placement": {"label": "mgr"}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-23T09:44:10.829609Z", "last_refresh": "2026-02-23T09:49:43.290030Z", "running": 3, "size": 3}}, {"events": ["2026-02-23T09:50:01.491074Z service:mon [INFO] \"service was created\""], "placement": {"label": "mon"}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-23T09:49:57.902137Z", "last_refresh": "2026-02-23T09:49:43.290166Z", "running": 3, "size": 3}}, {"placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-23T07:36:51.489181Z", "running": 0, "size": 0}}, {"placement": {"hosts": ["np0005626463.localdomain", "np0005626465.localdomain", "np0005626466.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": 
["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-23T07:37:19.249273Z", "last_refresh": "2026-02-23T09:49:43.289590Z", "running": 6, "size": 6}}]}, "changed": false} TASK [ceph_migrate : Print Service Map] **************************************** skipping: [np0005626459.localdomain] => (item={'placement': {'host_pattern': '*'}, 'service_name': 'crash', 'service_type': 'crash', 'status': {'created': '2026-02-23T07:36:37.670053Z', 'last_refresh': '2026-02-23T09:49:43.289412Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-02-23T07:36:37.670053Z", "last_refresh": "2026-02-23T09:49:43.289412Z", "running": 3, "size": 3}}} skipping: [np0005626459.localdomain] => (item={'placement': {'label': 'mds'}, 'service_id': 'mds', 'service_name': 'mds.mds', 'service_type': 'mds', 'status': {'created': '2026-02-23T09:42:42.690754Z', 'last_refresh': '2026-02-23T09:49:43.289894Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"label": "mds"}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2026-02-23T09:42:42.690754Z", "last_refresh": "2026-02-23T09:49:43.289894Z", "running": 3, "size": 3}}} skipping: [np0005626459.localdomain] => (item={'placement': {'label': 'mgr'}, 'service_name': 'mgr', 'service_type': 'mgr', 'status': {'created': '2026-02-23T09:44:10.829609Z', 'last_refresh': '2026-02-23T09:49:43.290030Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"label": "mgr"}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-02-23T09:44:10.829609Z", "last_refresh": "2026-02-23T09:49:43.290030Z", 
"running": 3, "size": 3}}} skipping: [np0005626459.localdomain] => (item={'events': ['2026-02-23T09:50:01.491074Z service:mon [INFO] "service was created"'], 'placement': {'label': 'mon'}, 'service_name': 'mon', 'service_type': 'mon', 'status': {'created': '2026-02-23T09:49:57.902137Z', 'last_refresh': '2026-02-23T09:49:43.290166Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2026-02-23T09:50:01.491074Z service:mon [INFO] \"service was created\""], "placement": {"label": "mon"}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-02-23T09:49:57.902137Z", "last_refresh": "2026-02-23T09:49:43.290166Z", "running": 3, "size": 3}}} skipping: [np0005626459.localdomain] => (item={'placement': {'host_pattern': '*'}, 'service_name': 'node-proxy', 'service_type': 'node-proxy', 'status': {'created': '2026-02-23T07:36:51.489181Z', 'running': 0, 'size': 0}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2026-02-23T07:36:51.489181Z", "running": 0, "size": 0}}} skipping: [np0005626459.localdomain] => (item={'placement': {'hosts': ['np0005626463.localdomain', 'np0005626465.localdomain', 'np0005626466.localdomain']}, 'service_id': 'default_drive_group', 'service_name': 'osd.default_drive_group', 'service_type': 'osd', 'spec': {'data_devices': {'paths': ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1']}, 'filter_logic': 'AND', 'objectstore': 'bluestore'}, 'status': {'created': '2026-02-23T07:37:19.249273Z', 'last_refresh': '2026-02-23T09:49:43.289590Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"hosts": ["np0005626463.localdomain", "np0005626465.localdomain", "np0005626466.localdomain"]}, "service_id": "default_drive_group", 
"service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-02-23T07:37:19.249273Z", "last_refresh": "2026-02-23T09:49:43.289590Z", "running": 6, "size": 6}}} skipping: [np0005626459.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch ls output to log file] ********************* changed: [np0005626459.localdomain] => {"changed": true, "checksum": "00264a7ea59de3e1d2af983a58964649f305788a", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_orch_ls.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "33217f406faacc3a7fcf10a750fbf47c", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1600, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771840243.6406398-64184-254515174961423/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph config] ****************************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "config", "dump", "-f", "json"], "delta": "0:00:04.814620", "end": "2026-02-23 09:50:49.748026", "msg": "", "rc": 0, "start": "2026-02-23 09:50:44.933406", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": 
"\n[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_haproxy\",\"value\":\"registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_keepalived\",\"value\":\"registry.redhat.io/rhceph/keepalived-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section
\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"3561601228\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626463\",\"location_type\":\"host\",\"location_value\":\"np0005626463\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"3561601228\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626465\",\"location_type\":\"host\",\"location_value\":\"np0005626465\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"3561598361\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626466\",\"location_type\":\"host\",\"location_value\":\"np0005626466\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.np0005626463.qcthuc\",\"name\":\"mds_join_fs\",\"value\":\"cephfs\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]", "stdout_lines": ["", 
"[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_haproxy\",\"value\":\"registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_keepalived\",\"value\":\"registry.redhat.io/rhceph/keepalived-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\"
:\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"3561601228\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626463\",\"location_type\":\"host\",\"location_value\":\"np0005626463\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"3561601228\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626465\",\"location_type\":\"host\",\"location_value\":\"np0005626465\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"3561598361\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005626466\",\"location_type\":\"host\",\"location_value\":\"np0005626466\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.np0005626463.qcthuc\",\"name\":\"mds_join_fs\",\"value\":\"cephfs\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]"]} TASK [ceph_migrate : Print Ceph config dump] *********************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph config dump output to log file] ***************** changed: [np0005626459.localdomain] => {"changed": true, "checksum": "7ebaa7185ba3e2ee58e75d4a1935b917b527a590", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_config_dump.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "e664f8754e956e94bb36fcdc49fb7f37", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 3044, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771840249.9290433-64211-139873728050126/source", "state": "file", 
"uid": 1003} TASK [ceph_migrate : Get Ceph Orch Host Map] *********************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "host", "ls", "-f", "json"], "delta": "0:00:04.819203", "end": "2026-02-23 09:50:56.104255", "msg": "", "rc": 0, "start": "2026-02-23 09:50:51.285052", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3"], "stdout": "\n[{\"addr\": \"192.168.122.106\", \"hostname\": \"np0005626463.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005626465.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005626466.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.106\", \"hostname\": \"np0005626463.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005626465.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005626466.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}]"]} TASK 
[ceph_migrate : Load nodes] *********************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"nds": [{"addr": "192.168.122.106", "hostname": "np0005626463.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}, {"addr": "192.168.122.107", "hostname": "np0005626465.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}, {"addr": "192.168.122.108", "hostname": "np0005626466.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}]}, "changed": false} TASK [ceph_migrate : Load hostmap List] **************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"hostmap": {"np0005626463.localdomain": ["osd", "mds", "mgr", "mon", "_admin"], "np0005626465.localdomain": ["osd", "mds", "mgr", "mon", "_admin"], "np0005626466.localdomain": ["osd", "mds", "mgr", "mon", "_admin"]}}, "changed": false} TASK [ceph_migrate : Print Host Map] ******************************************* skipping: [np0005626459.localdomain] => (item=np0005626463.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626463.localdomain"} skipping: [np0005626459.localdomain] => (item=np0005626465.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626465.localdomain"} skipping: [np0005626459.localdomain] => (item=np0005626466.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005626466.localdomain"} skipping: [np0005626459.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch host ls output to log file] **************** changed: [np0005626459.localdomain] => {"changed": true, "checksum": "03563f091be2fd59fa322d08c28947b87c44987d", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_orch_host_ls.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "8c7400630034663fa02e5817f526c131", "mode": "0644", 
"owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 84, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771840256.338486-64242-99767192593991/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph monmap and load data] **************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "mon", "dump", "-f", "json"], "delta": "0:00:05.066054", "end": "2026-02-23 09:51:02.771227", "msg": "", "rc": 0, "start": "2026-02-23 09:50:57.705173", "stderr": "Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3\ndumped monmap epoch 15", "stderr_lines": ["Inferring fsid f1fea371-cb69-578d-a3d0-b5c472a84b46", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '957884a57883' and tag 'latest' created on 2026-02-09 10:26:08 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:50945286dae5941044aa91c7700ff058ab2cb308d5d1d6d6bb2daf28aa7a0ca3", "dumped monmap epoch 15"], "stdout": "\n{\"epoch\":15,\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"modified\":\"2026-02-23T09:49:26.924061Z\",\"created\":\"2026-02-23T07:36:01.997603Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: 
\":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005626463\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005626465\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005626466\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}", "stdout_lines": ["", "{\"epoch\":15,\"fsid\":\"f1fea371-cb69-578d-a3d0-b5c472a84b46\",\"modified\":\"2026-02-23T09:49:26.924061Z\",\"created\":\"2026-02-23T07:36:01.997603Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: 
\":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005626463\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005626465\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005626466\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}"]} TASK [ceph_migrate : Get Monmap] *********************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"mon_dump": {"created": "2026-02-23T07:36:01.997603Z", "disallowed_leaders: ": "", "election_strategy": 1, "epoch": 15, "features": {"optional": [], "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging", "quincy", "reef"]}, "fsid": "f1fea371-cb69-578d-a3d0-b5c472a84b46", "min_mon_release": 18, "min_mon_release_name": "reef", "modified": "2026-02-23T09:49:26.924061Z", "mons": [{"addr": "172.18.0.103:6789/0", "crush_location": "{}", "name": "np0005626463", "priority": 0, "public_addr": "172.18.0.103:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.103:3300", "nonce": 0, "type": "v2"}, {"addr": 
"172.18.0.103:6789", "nonce": 0, "type": "v1"}]}, "rank": 0, "weight": 0}, {"addr": "172.18.0.104:6789/0", "crush_location": "{}", "name": "np0005626465", "priority": 0, "public_addr": "172.18.0.104:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.104:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.104:6789", "nonce": 0, "type": "v1"}]}, "rank": 1, "weight": 0}, {"addr": "172.18.0.105:6789/0", "crush_location": "{}", "name": "np0005626466", "priority": 0, "public_addr": "172.18.0.105:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.105:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.105:6789", "nonce": 0, "type": "v1"}]}, "rank": 2, "weight": 0}], "quorum": [0, 1, 2], "removed_ranks: ": "", "stretch_mode": false, "tiebreaker_mon": ""}}, "changed": false} TASK [ceph_migrate : Print monmap] ********************************************* skipping: [np0005626459.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph mon dump output to log file] ******************** changed: [np0005626459.localdomain] => {"changed": true, "checksum": "41a0df250e02add6731c08ec694b79e710969fb3", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_mon_dump.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "c01c3b10f89f75775d273c2ac7d60b73", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1425, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771840262.9782078-64271-24735661085756/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Load nodes to decommission] ******************************* ok: [np0005626459.localdomain] => {"ansible_facts": {"decomm_nodes": ["np0005626463.localdomain", "np0005626465.localdomain", "np0005626466.localdomain"]}, "changed": false} TASK [ceph_migrate : Load target nodes] **************************************** ok: [np0005626459.localdomain] => {"ansible_facts": {"target_nodes": ["np0005626463.localdomain", 
"np0005626465.localdomain", "np0005626466.localdomain"]}, "changed": false} TASK [ceph_migrate : Print target nodes] *************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Print decomm_nodes] *************************************** skipping: [np0005626459.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Configure Swift to use rgw backend] *********************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Flush handlers to ensure mgr restart completes] *********** RUNNING HANDLER [ceph_migrate : restart mgr] *********************************** changed: [np0005626459.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "f1fea371-cb69-578d-a3d0-b5c472a84b46", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.786559", "end": "2026-02-23 09:51:05.134873", "msg": "", "rc": 0, "start": "2026-02-23 09:51:04.348314", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Install cephadm on all compute nodes] ********************* skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "groups['ComputeHCI'] is defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Force fail ceph mgr on first compute node] **************** skipping: [np0005626459.localdomain] => {"changed": false, "false_condition": "groups['ComputeHCI'] is defined", "skip_reason": "Conditional result was False"} PLAY RECAP ********************************************************************* np0005626459.localdomain : 
ok=239 changed=111 unreachable=0 failed=0 skipped=143 rescued=0 ignored=0
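
For reference, the `ceph_migrate` tasks in this play are thin wrappers around plain `cephadm`/`podman` invocations. The sketch below is reconstructed from the commands recorded in this run's output; it assumes you are on an `_admin` node of the source cluster with the client files extracted to `/home/tripleo-admin/ceph_client`, and is not runnable outside that environment.

```shell
# Sketch only: commands taken verbatim from the task output above.
set -euo pipefail

# Inspect the source cluster; the role saves these same outputs under
# /home/tripleo-admin/ceph_client/logs/ (ceph_orch_ls.log, etc.):
cephadm shell -- ceph orch ls -f json        # service map (crash, mds, mgr, mon, osd)
cephadm shell -- ceph config dump -f json    # cluster configuration
cephadm shell -- ceph orch host ls -f json   # host/label map
cephadm shell -- ceph mon dump -f json       # monmap: epoch, mons, quorum

# Fail over the active mgr the same way the "restart mgr" handler does,
# running the ceph CLI from the containerized client with the extracted
# admin keyring (fsid is this cluster's; substitute your own):
podman run --rm --net=host --ipc=host \
  --volume /home/tripleo-admin/ceph_client:/etc/ceph:z \
  --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
  --fsid f1fea371-cb69-578d-a3d0-b5c472a84b46 \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  mgr fail
```

Note that each `cephadm shell` call prints the "Inferring fsid / Inferring config" lines seen in the task `stderr`; only `stdout` carries the JSON the role parses.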