[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (edpm_node_hostname). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (edpm_node_ip). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (source_galera_members). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (source_mariadb_ip). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (enable_tlse). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (neutron_qe_test). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (tobiko_qe_test). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (neutron_qe_dir). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (prelaunch_barbican_secret). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (os_cloud_name). Using last defined value only.
[WARNING]: While constructing a mapping from /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/vars.yaml, line 6, column 1, found a duplicate dict key (standalone_ip). Using last defined value only.
Using /home/zuul/src/review.rdoproject.org/rdo-jobs/playbooks/data_plane_adoption/ansible.cfg as config file

PLAY [Externalize Ceph] ********************************************************

TASK [Gathering Facts] *********************************************************
ok: [np0005548785.localdomain]

TASK [ceph_migrate : Check file in the src directory] **************************
[WARNING]: Skipped '/home/tripleo-admin/ceph_client' path due to this access issue: '/home/tripleo-admin/ceph_client' is not a directory
ok: [np0005548785.localdomain] => {"changed": false, "examined": 0, "files": [], "matched": 0, "msg": "Not all paths examined, check warnings for details", "skipped_paths": {"/home/tripleo-admin/ceph_client": "'/home/tripleo-admin/ceph_client' is not a directory"}}

TASK [ceph_migrate : Restore files] ********************************************
skipping: [np0005548785.localdomain] => (item=ceph.conf) => {"ansible_loop_var": "item", "changed": false, "false_condition": "dir_ceph_files.files | length > 0", "item": "ceph.conf", "skip_reason": "Conditional result was False"}
skipping: [np0005548785.localdomain] => (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": false, "false_condition": "dir_ceph_files.files | length > 0", "item": "ceph.client.admin.keyring", "skip_reason": "Conditional result was False"}
skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"}

TASK [ceph_migrate : Ensure backup directory exists] ***************************
skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"}

TASK [ceph_migrate : Get Ceph Health] ******************************************
changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "-s", "-f", "json"], "delta": "0:00:02.608159", "end": "2025-12-06 10:01:30.112502", "msg": "", "rc": 0, "start": "2025-12-06 10:01:27.504343", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": 
"\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":14,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548785\",\"np0005548787\",\"np0005548786\"],\"quorum_age\":7311,\"monmap\":{\"epoch\":3,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":592916480,\"bytes_avail\":44479074304,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":6,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548787.rhwfmx\",\"status\":\"up:active\",\"gid\":24235}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":68,\"modified\":\"2025-12-06T10:01:22.508218+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":14,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548785\",\"np0005548787\",\"np0005548786\"],\"quorum_age\":7311,\"monmap\":{\"epoch\":3,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":84,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":592916480,\"bytes_avail\":44479074304,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":6,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548787.rhwfmx\",\"status\":\"up:active\",\"gid\":24235}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":2,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":68,\"modified\":\"2025-12-06T10:01:22.508218+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Load ceph data] ******************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph": {"election_epoch": 14, "fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "fsmap": {"by_rank": [{"filesystem_id": 1, "gid": 24235, "name": "mds.np0005548787.rhwfmx", "rank": 0, "status": "up:active"}], "epoch": 6, "id": 1, "in": 1, "max": 1, "up": 1, "up:standby": 2}, "health": {"checks": {}, "mutes": [], "status": "HEALTH_OK"}, "mgrmap": {"available": true, "modules": ["cephadm", "iostat", "nfs", "restful"], "num_standbys": 2, "services": {}}, "monmap": {"epoch": 3, "min_mon_release_name": "reef", "num_mons": 3}, "osdmap": {"epoch": 84, "num_in_osds": 6, "num_osds": 6, "num_remapped_pgs": 0, "num_up_osds": 6, "osd_in_since": 1765007998, "osd_up_since": 1765008020}, "pgmap": {"bytes_avail": 44479074304, "bytes_total": 45071990784, "bytes_used": 592916480, "data_bytes": 109571242, "num_objects": 69, "num_pgs": 177, "num_pools": 7, "pgs_by_state": [{"count": 177, "state_name": "active+clean"}]}, "progress_events": {}, "quorum": [0, 1, 2], "quorum_age": 7311, "quorum_names": ["np0005548785", "np0005548787", "np0005548786"], "servicemap": {"epoch": 68, 
"modified": "2025-12-06T10:01:22.508218+0000", "services": {}}}}, "changed": false} TASK [ceph_migrate : Dump ceph -s output to log file] ************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph Orch ServiceMap] ********************************* changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "ls", "-f", "json"], "delta": "0:00:02.604408", "end": "2025-12-06 10:01:34.120721", "msg": "", "rc": 0, "start": "2025-12-06 10:01:31.516313", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": "\n[{\"events\": [\"2025-12-06T07:59:52.343312Z service:crash [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2025-12-06T07:57:51.164915Z\", \"last_refresh\": \"2025-12-06T09:52:25.266834Z\", \"running\": 6, \"size\": 6}}, {\"events\": [\"2025-12-06T08:18:52.115534Z service:mds.mds [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005548785.localdomain\", \"np0005548786.localdomain\", \"np0005548787.localdomain\"]}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2025-12-06T08:18:43.527187Z\", \"last_refresh\": \"2025-12-06T09:52:25.266956Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T07:59:41.116828Z service:mgr [INFO] \\\"service was created\\\"\", \"2025-12-06T07:58:43.928448Z service:mgr [ERROR] \\\"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005548785.localdomain\", \"np0005548786.localdomain\", \"np0005548787.localdomain\"]}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2025-12-06T07:58:35.639840Z\", \"last_refresh\": \"2025-12-06T09:52:25.266705Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T07:59:32.461581Z service:mon [INFO] \\\"service was created\\\"\", \"2025-12-06T07:58:43.927086Z service:mon [ERROR] \\\"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005548785.localdomain\", \"np0005548786.localdomain\", \"np0005548787.localdomain\"]}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2025-12-06T07:58:35.627741Z\", \"last_refresh\": \"2025-12-06T09:52:25.266541Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T07:58:05.645300Z service:node-proxy [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": 
\"2025-12-06T07:58:05.620181Z\", \"running\": 0, \"size\": 0}}, {\"events\": [\"2025-12-06T07:58:35.655692Z service:osd.default_drive_group [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005548788.localdomain\", \"np0005548789.localdomain\", \"np0005548790.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2025-12-06T07:58:35.648091Z\", \"last_refresh\": \"2025-12-06T09:55:36.091987Z\", \"running\": 6, \"size\": 6}}]", "stdout_lines": ["", "[{\"events\": [\"2025-12-06T07:59:52.343312Z service:crash [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2025-12-06T07:57:51.164915Z\", \"last_refresh\": \"2025-12-06T09:52:25.266834Z\", \"running\": 6, \"size\": 6}}, {\"events\": [\"2025-12-06T08:18:52.115534Z service:mds.mds [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005548785.localdomain\", \"np0005548786.localdomain\", \"np0005548787.localdomain\"]}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2025-12-06T08:18:43.527187Z\", \"last_refresh\": \"2025-12-06T09:52:25.266956Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T07:59:41.116828Z service:mgr [INFO] \\\"service was created\\\"\", \"2025-12-06T07:58:43.928448Z service:mgr [ERROR] \\\"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005548785.localdomain\", \"np0005548786.localdomain\", \"np0005548787.localdomain\"]}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2025-12-06T07:58:35.639840Z\", \"last_refresh\": \"2025-12-06T09:52:25.266705Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T07:59:32.461581Z service:mon [INFO] \\\"service was created\\\"\", \"2025-12-06T07:58:43.927086Z service:mon [ERROR] \\\"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\\\"\"], \"placement\": {\"hosts\": [\"np0005548785.localdomain\", \"np0005548786.localdomain\", \"np0005548787.localdomain\"]}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2025-12-06T07:58:35.627741Z\", \"last_refresh\": \"2025-12-06T09:52:25.266541Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T07:58:05.645300Z service:node-proxy [INFO] \\\"service was created\\\"\"], \"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2025-12-06T07:58:05.620181Z\", \"running\": 0, \"size\": 0}}, {\"events\": [\"2025-12-06T07:58:35.655692Z service:osd.default_drive_group [INFO] \\\"service was created\\\"\"], \"placement\": {\"hosts\": [\"np0005548788.localdomain\", \"np0005548789.localdomain\", \"np0005548790.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2025-12-06T07:58:35.648091Z\", \"last_refresh\": \"2025-12-06T09:55:36.091987Z\", \"running\": 6, 
\"size\": 6}}]"]} TASK [ceph_migrate : Load Service Map] ***************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"servicemap": [{"events": ["2025-12-06T07:59:52.343312Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2025-12-06T07:57:51.164915Z", "last_refresh": "2025-12-06T09:52:25.266834Z", "running": 6, "size": 6}}, {"events": ["2025-12-06T08:18:52.115534Z service:mds.mds [INFO] \"service was created\""], "placement": {"hosts": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2025-12-06T08:18:43.527187Z", "last_refresh": "2025-12-06T09:52:25.266956Z", "running": 3, "size": 3}}, {"events": ["2025-12-06T07:59:41.116828Z service:mgr [INFO] \"service was created\"", "2025-12-06T07:58:43.928448Z service:mgr [ERROR] \"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2025-12-06T07:58:35.639840Z", "last_refresh": "2025-12-06T09:52:25.266705Z", "running": 3, "size": 3}}, {"events": ["2025-12-06T07:59:32.461581Z service:mon [INFO] \"service was created\"", "2025-12-06T07:58:43.927086Z service:mon [ERROR] \"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2025-12-06T07:58:35.627741Z", "last_refresh": "2025-12-06T09:52:25.266541Z", "running": 3, "size": 3}}, {"events": ["2025-12-06T07:58:05.645300Z service:node-proxy [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2025-12-06T07:58:05.620181Z", "running": 0, "size": 0}}, {"events": ["2025-12-06T07:58:35.655692Z service:osd.default_drive_group [INFO] \"service was created\""], "placement": {"hosts": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2025-12-06T07:58:35.648091Z", "last_refresh": "2025-12-06T09:55:36.091987Z", "running": 6, "size": 6}}]}, "changed": false} TASK [ceph_migrate : Print Service Map] **************************************** skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T07:59:52.343312Z service:crash [INFO] "service was created"'], 'placement': {'host_pattern': '*'}, 'service_name': 'crash', 'service_type': 'crash', 'status': {'created': '2025-12-06T07:57:51.164915Z', 'last_refresh': '2025-12-06T09:52:25.266834Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2025-12-06T07:59:52.343312Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2025-12-06T07:57:51.164915Z", "last_refresh": "2025-12-06T09:52:25.266834Z", "running": 6, "size": 
6}}} skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T08:18:52.115534Z service:mds.mds [INFO] "service was created"'], 'placement': {'hosts': ['np0005548785.localdomain', 'np0005548786.localdomain', 'np0005548787.localdomain']}, 'service_id': 'mds', 'service_name': 'mds.mds', 'service_type': 'mds', 'status': {'created': '2025-12-06T08:18:43.527187Z', 'last_refresh': '2025-12-06T09:52:25.266956Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2025-12-06T08:18:52.115534Z service:mds.mds [INFO] \"service was created\""], "placement": {"hosts": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2025-12-06T08:18:43.527187Z", "last_refresh": "2025-12-06T09:52:25.266956Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T07:59:41.116828Z service:mgr [INFO] "service was created"', '2025-12-06T07:58:43.928448Z service:mgr [ERROR] "Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts"'], 'placement': {'hosts': ['np0005548785.localdomain', 'np0005548786.localdomain', 'np0005548787.localdomain']}, 'service_name': 'mgr', 'service_type': 'mgr', 'status': {'created': '2025-12-06T07:58:35.639840Z', 'last_refresh': '2025-12-06T09:52:25.266705Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2025-12-06T07:59:41.116828Z service:mgr [INFO] \"service was created\"", "2025-12-06T07:58:43.928448Z service:mgr [ERROR] \"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2025-12-06T07:58:35.639840Z", "last_refresh": "2025-12-06T09:52:25.266705Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T07:59:32.461581Z service:mon [INFO] "service was created"', '2025-12-06T07:58:43.927086Z service:mon [ERROR] "Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts"'], 'placement': {'hosts': ['np0005548785.localdomain', 'np0005548786.localdomain', 'np0005548787.localdomain']}, 'service_name': 'mon', 'service_type': 'mon', 'status': {'created': '2025-12-06T07:58:35.627741Z', 'last_refresh': '2025-12-06T09:52:25.266541Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2025-12-06T07:59:32.461581Z service:mon [INFO] \"service was created\"", "2025-12-06T07:58:43.927086Z service:mon [ERROR] \"Failed to apply: Cannot place on np0005548787.localdomain: Unknown hosts\""], "placement": {"hosts": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2025-12-06T07:58:35.627741Z", "last_refresh": "2025-12-06T09:52:25.266541Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T07:58:05.645300Z service:node-proxy [INFO] "service was created"'], 'placement': {'host_pattern': '*'}, 'service_name': 'node-proxy', 'service_type': 'node-proxy', 'status': {'created': '2025-12-06T07:58:05.620181Z', 'running': 0, 'size': 0}}) => {"ansible_loop_var": "item", "false_condition": "debug | 
default(false)", "item": {"events": ["2025-12-06T07:58:05.645300Z service:node-proxy [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2025-12-06T07:58:05.620181Z", "running": 0, "size": 0}}} skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T07:58:35.655692Z service:osd.default_drive_group [INFO] "service was created"'], 'placement': {'hosts': ['np0005548788.localdomain', 'np0005548789.localdomain', 'np0005548790.localdomain']}, 'service_id': 'default_drive_group', 'service_name': 'osd.default_drive_group', 'service_type': 'osd', 'spec': {'data_devices': {'paths': ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1']}, 'filter_logic': 'AND', 'objectstore': 'bluestore'}, 'status': {'created': '2025-12-06T07:58:35.648091Z', 'last_refresh': '2025-12-06T09:55:36.091987Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2025-12-06T07:58:35.655692Z service:osd.default_drive_group [INFO] \"service was created\""], "placement": {"hosts": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2025-12-06T07:58:35.648091Z", "last_refresh": "2025-12-06T09:55:36.091987Z", "running": 6, "size": 6}}} skipping: [np0005548785.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch ls output to log file] ********************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph config] ****************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "config", "dump", "-f", "json"], "delta": "0:00:02.630294", "end": "2025-12-06 10:01:37.458192", "msg": "", "rc": 0, "start": "2025-12-06 10:01:34.827898", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": 
"\n[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709082009\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548788\",\"location_type\":\"host\",\"location_value\":\"np0005548788\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548789\",\"location_type\":\"host\",\"location_value\":\"np0005548789\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548790\",\"location_type\":\"host\",\"location_value\":\"np0005548790\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]", "stdout_lines": ["", 
"[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709082009\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548788\",\"location_type\":\"host\",\"location_value\":\"np0005548788\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548789\",\"location_type\":\"host\",\"location_value\":\"np0005548789\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548790\",\"location_type\":\"host\",\"location_value\":\"np0005548790\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]"]} TASK [ceph_migrate : Print Ceph config dump] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph config dump output to log file] ***************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph Orch Host Map] *********************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "host", "ls", "-f", "json"], "delta": "0:00:02.639139", "end": "2025-12-06 10:01:40.683946", "msg": "", "rc": 0, "start": "2025-12-06 10:01:38.044807", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config 
/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": "\n[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005548785.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.104\", \"hostname\": \"np0005548786.localdomain\", \"labels\": [\"mon\", \"_admin\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.105\", \"hostname\": \"np0005548787.localdomain\", \"labels\": [\"mon\", \"_admin\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.106\", \"hostname\": \"np0005548788.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005548789.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005548790.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005548785.localdomain\", \"labels\": [\"_admin\", \"mon\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.104\", \"hostname\": \"np0005548786.localdomain\", \"labels\": [\"mon\", \"_admin\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.105\", \"hostname\": \"np0005548787.localdomain\", \"labels\": [\"mon\", \"_admin\", \"mgr\"], \"status\": \"\"}, {\"addr\": \"192.168.122.106\", \"hostname\": \"np0005548788.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005548789.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005548790.localdomain\", \"labels\": [\"osd\"], \"status\": \"\"}]"]} TASK [ceph_migrate : Load nodes] *********************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"nds": [{"addr": "192.168.122.103", "hostname": "np0005548785.localdomain", "labels": ["_admin", "mon", "mgr"], "status": ""}, {"addr": "192.168.122.104", "hostname": "np0005548786.localdomain", "labels": ["mon", "_admin", "mgr"], "status": ""}, {"addr": "192.168.122.105", "hostname": "np0005548787.localdomain", "labels": ["mon", "_admin", "mgr"], "status": ""}, {"addr": "192.168.122.106", "hostname": "np0005548788.localdomain", "labels": ["osd"], "status": ""}, {"addr": "192.168.122.107", "hostname": "np0005548789.localdomain", "labels": ["osd"], "status": ""}, {"addr": "192.168.122.108", "hostname": "np0005548790.localdomain", "labels": ["osd"], "status": ""}]}, "changed": false} TASK [ceph_migrate : Load hostmap List] **************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"hostmap": {"np0005548785.localdomain": ["_admin", "mon", "mgr"], "np0005548786.localdomain": ["mon", "_admin", "mgr"], "np0005548787.localdomain": ["mon", "_admin", "mgr"], "np0005548788.localdomain": ["osd"], "np0005548789.localdomain": ["osd"], "np0005548790.localdomain": ["osd"]}}, "changed": false} TASK [ceph_migrate : Print Host 
Map] ******************************************* skipping: [np0005548785.localdomain] => (item=np0005548785.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548785.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548786.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548786.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548787.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548787.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548788.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548788.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548789.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548789.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548790.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548790.localdomain"} skipping: [np0005548785.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch host ls output to log file] **************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get Ceph monmap and load data] **************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "mon", "dump", "-f", "json"], "delta": "0:00:02.788279", "end": "2025-12-06 10:01:44.178153", "msg": "", "rc": 0, "start": "2025-12-06 10:01:41.389874", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\ndumped monmap epoch 3", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/mon.np0005548785/config", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "dumped monmap epoch 3"], "stdout": "\n{\"epoch\":3,\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"modified\":\"2025-12-06T07:59:33.837951Z\",\"created\":\"2025-12-06T07:57:14.295835Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: 
\":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005548785\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005548787\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005548786\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}", "stdout_lines": ["", "{\"epoch\":3,\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"modified\":\"2025-12-06T07:59:33.837951Z\",\"created\":\"2025-12-06T07:57:14.295835Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: \":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005548785\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005548787\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005548786\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}"]} TASK [ceph_migrate : Get Monmap] *********************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mon_dump": {"created": "2025-12-06T07:57:14.295835Z", "disallowed_leaders: ": "", "election_strategy": 1, "epoch": 3, "features": {"optional": [], "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging", "quincy", "reef"]}, "fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "min_mon_release": 18, "min_mon_release_name": "reef", "modified": "2025-12-06T07:59:33.837951Z", "mons": [{"addr": "172.18.0.103:6789/0", "crush_location": "{}", "name": "np0005548785", "priority": 0, "public_addr": "172.18.0.103:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.103:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.103:6789", "nonce": 0, "type": 
"v1"}]}, "rank": 0, "weight": 0}, {"addr": "172.18.0.105:6789/0", "crush_location": "{}", "name": "np0005548787", "priority": 0, "public_addr": "172.18.0.105:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.105:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.105:6789", "nonce": 0, "type": "v1"}]}, "rank": 1, "weight": 0}, {"addr": "172.18.0.104:6789/0", "crush_location": "{}", "name": "np0005548786", "priority": 0, "public_addr": "172.18.0.104:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.104:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.104:6789", "nonce": 0, "type": "v1"}]}, "rank": 2, "weight": 0}], "quorum": [0, 1, 2], "removed_ranks: ": "", "stretch_mode": false, "tiebreaker_mon": ""}}, "changed": false} TASK [ceph_migrate : Print monmap] ********************************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph mon dump output to log file] ******************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "dump | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Load nodes to decommission] ******************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"decomm_nodes": ["np0005548785.localdomain", "np0005548786.localdomain", "np0005548787.localdomain"]}, "changed": false} TASK [ceph_migrate : Load target nodes] **************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"target_nodes": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "changed": false} TASK [ceph_migrate : Print target nodes] *************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Print decomm_nodes] *************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : ansible.builtin.fail if input is not provided] ************ skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph is undefined or ceph | length == 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get cluster health] *************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if health is HEALTH_WARN || HEALTH_ERR] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph.health.status == 'HEALTH_WARN' or ceph.health.status == 'HEALTH_ERR'", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : PgMap] **************************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if PGs are not in active+clean state] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "pgstate != 'active+clean'", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : OSDMap] *************************************************** ok: [np0005548785.localdomain] => { "msg": "100.0" } TASK [ceph_migrate : ansible.builtin.fail if there is an unacceptable OSDs number] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "pct | float < 100", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MonMap] *************************************************** skipping: 
[np0005548785.localdomain] => {"false_condition": "check_ceph_release | default(false) | bool"} TASK [ceph_migrate : ansible.builtin.fail if Ceph <= Quincy] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "check_ceph_release | default(false) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Mons in quorum] ******************************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if Mons are not in quorum] *********** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph.monmap.num_mons < decomm_nodes | length", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : is Ceph Mgr available] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if Mgr is not available] ************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not ceph.mgrmap.available | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : in progress events] *************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : ansible.builtin.fail if there are in progress events] ***** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph.progress_events | length > 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Dump Ceph Status] ***************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : set container image base in ceph configuration] *********** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_base", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"], "delta": "0:00:00.674096", "end": "2025-12-06 10:01:46.438833", "msg": "", "rc": 0, "start": "2025-12-06 10:01:45.764737", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : set alertmanager container image in ceph configuration] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set grafana container image in ceph configuration] ******** skipping: [np0005548785.localdomain] => {"changed": false, 
"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set node-exporter container image in ceph configuration] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : set prometheus container image in ceph configuration] ***** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set haproxy container image in ceph configuration] ******** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_haproxy", "registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest"], "delta": "0:00:00.720661", "end": "2025-12-06 10:01:47.909642", "msg": "", "rc": 0, "start": "2025-12-06 10:01:47.188981", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Set keepalived container image in ceph configuration] ***** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "config", "set", "mgr", "mgr/cephadm/container_image_keepalived", "registry.redhat.io/rhceph/keepalived-rhel9:latest"], "delta": "0:00:00.776992", "end": "2025-12-06 10:01:49.313407", "msg": "", "rc": 0, "start": "2025-12-06 10:01:48.536415", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Update firewall rules on the target nodes] **************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005548785.localdomain => (item=np0005548788.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005548785.localdomain => (item=np0005548789.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/firewall.yaml for np0005548785.localdomain => (item=np0005548790.localdomain) TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005548785.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] 
************ skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005548785.localdomain -> np0005548788.localdomain(192.168.122.106)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005548785.localdomain -> np0005548788.localdomain(192.168.122.106)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2025-12-06 08:09:10 UTC", "ActiveEnterTimestampMonotonic": "4971063895", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket sysinit.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-12-06 08:09:10 UTC", "AssertTimestampMonotonic": "4970962995", "Before": "network-pre.target multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "35756000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-12-06 08:09:10 UTC", "ConditionTimestampMonotonic": "4970962993", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2025-12-06 08:09:10 UTC", "ExecMainExitTimestampMonotonic": "4971063679", "ExecMainPID": "42498", "ExecMainStartTimestamp": "Sat 2025-12-06 08:09:10 UTC", "ExecMainStartTimestampMonotonic": "4970975307", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f 
/etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-12-06 08:09:10 UTC", "InactiveExitTimestampMonotonic": "4970975657", "InvocationID": "028c7d6dce1043bab48475439aef2e48", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": 
"no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-12-06 08:09:10 UTC", "StateChangeTimestampMonotonic": "4971063895", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005548785.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005548785.localdomain -> np0005548789.localdomain(192.168.122.107)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005548785.localdomain -> 
np0005548789.localdomain(192.168.122.107)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2025-12-06 08:09:11 UTC", "ActiveEnterTimestampMonotonic": "4969511763", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice sysinit.target basic.target systemd-journald.socket", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-12-06 08:09:11 UTC", "AssertTimestampMonotonic": "4969437209", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "22967000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-12-06 08:09:11 UTC", "ConditionTimestampMonotonic": "4969437206", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2025-12-06 08:09:11 UTC", "ExecMainExitTimestampMonotonic": "4969511378", "ExecMainPID": "42263", "ExecMainStartTimestamp": "Sat 2025-12-06 08:09:11 UTC", "ExecMainStartTimestampMonotonic": "4969450253", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; 
pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-12-06 08:09:11 UTC", "InactiveExitTimestampMonotonic": "4969450560", "InvocationID": "b84ada756951492eb27fead3fe4cf44e", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", 
"RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-12-06 08:09:11 UTC", "StateChangeTimestampMonotonic": "4969511763", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (iptables)] *** skipping: [np0005548785.localdomain] => (item=/etc/sysconfig/iptables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/iptables", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=/etc/sysconfig/ip6tables) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "item": "/etc/sysconfig/ip6tables", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Ensure firewall is enabled/started - iptables] ************ skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_firewall_type == \"iptables\"", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Ceph Migration - Apply the Ceph cluster rules (nftables)] *** changed: [np0005548785.localdomain -> np0005548790.localdomain(192.168.122.108)] => {"changed": true, "msg": "Block inserted and ownership, perms or SE linux context changed"} TASK [ceph_migrate : Ensure firewall is enabled/started - nftables] ************ changed: [np0005548785.localdomain -> np0005548790.localdomain(192.168.122.108)] => {"changed": true, "enabled": true, "name": "nftables", "state": "started", "status": {"AccessSELinuxContext": "system_u:object_r:systemd_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2025-12-06 08:09:11 UTC", "ActiveEnterTimestampMonotonic": "4966274490", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice sysinit.target systemd-journald.socket basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-12-06 08:09:10 UTC", 
"AssertTimestampMonotonic": "4966179057", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "31097000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-12-06 08:09:10 UTC", "ConditionTimestampMonotonic": "4966179055", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Netfilter Tables", "DevicePolicy": "auto", "Documentation": "\"man:nft(8)\"", "DynamicUser": "no", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2025-12-06 08:09:11 UTC", "ExecMainExitTimestampMonotonic": "4966274184", "ExecMainPID": "42276", "ExecMainStartTimestamp": "Sat 2025-12-06 08:09:10 UTC", "ExecMainStartTimestampMonotonic": "4966191123", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset; include \"/etc/sysconfig/nftables.conf\"; ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/sbin/nft ; argv[]=/sbin/nft -f /etc/sysconfig/nftables.conf ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/sbin/nft ; argv[]=/sbin/nft flush ruleset ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/nftables.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "yes", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": 
"18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "nftables.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-12-06 08:09:10 UTC", "InactiveExitTimestampMonotonic": "4966191301", "InvocationID": "2c1688fa70e7458a95e008d9c7ba61a6", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "62638", "LimitNPROCSoft": "62638", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "62638", "LimitSIGPENDINGSoft": "62638", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "nftables.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "yes", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "full", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", 
"StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-12-06 08:09:11 UTC", "StateChangeTimestampMonotonic": "4966274490", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "100220", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} TASK [ceph_migrate : Get ceph_cli] ********************************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the dashboard port] *********************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the dashboard ssl port] ******************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Disable mgr dashboard module (restart)] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Enable mgr dashboard module (restart)] ******************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the prometheus server port] *************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set the prometheus server address] ************************ skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Enable prometheus module] ********************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Print nodes] ********************************************** 
skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => (item=['np0005548788.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005548788.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=['np0005548789.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005548789.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=['np0005548790.localdomain', 'monitoring']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": ["np0005548790.localdomain", "monitoring"], "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : MONITORING - Load Spec from the orchestrator] ************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the loaded data] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : Update the Monitoring Stack spec definition] ************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.monitoring | default(true) | bool"} TASK [ceph_migrate : MONITORING - wait daemons] ******************************** skipping: [np0005548785.localdomain] => (item=grafana) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "grafana", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=prometheus) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "prometheus", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=alertmanager) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "item": "alertmanager", "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : Sleep before moving to the next daemon] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.monitoring | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for 
np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MDS - Load Spec from the orchestrator] ******************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mds_spec": {"service_name": "mds.mds", "service_type": "mds", "spec": {}}}, "changed": false} TASK [ceph_migrate : Print the resulting MDS spec] ***************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548785.localdomain", "mds"], "delta": "0:00:00.877434", "end": "2025-12-06 10:02:00.455645", "item": ["np0005548785.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:01:59.578211", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005548785.localdomain", "stdout_lines": ["Added label mds to host np0005548785.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548786.localdomain", "mds"], "delta": "0:00:00.730732", "end": "2025-12-06 10:02:01.769663", "item": ["np0005548786.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:01.038931", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005548786.localdomain", "stdout_lines": ["Added label mds to host np0005548786.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548787.localdomain", "mds"], "delta": "0:00:00.678332", "end": "2025-12-06 10:02:03.009577", "item": ["np0005548787.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 
10:02:02.331245", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005548787.localdomain", "stdout_lines": ["Added label mds to host np0005548787.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548788.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548788.localdomain", "mds"], "delta": "0:00:00.735180", "end": "2025-12-06 10:02:04.286397", "item": ["np0005548788.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:03.551217", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005548788.localdomain", "stdout_lines": ["Added label mds to host np0005548788.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548789.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548789.localdomain", "mds"], "delta": "0:00:00.788467", "end": "2025-12-06 10:02:05.579028", "item": ["np0005548789.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:04.790561", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005548789.localdomain", "stdout_lines": ["Added label mds to host np0005548789.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548790.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548790.localdomain", "mds"], "delta": "0:00:00.687386", "end": "2025-12-06 10:02:06.844035", "item": ["np0005548790.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:06.156649", "stderr": "", "stderr_lines": [], "stdout": "Added label mds to host np0005548790.localdomain", "stdout_lines": ["Added label mds to host np0005548790.localdomain"]} TASK [ceph_migrate : Update the MDS Daemon spec definition] ******************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mds:/home/tripleo-admin/mds:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mds"], "delta": "0:00:00.694303", "end": "2025-12-06 10:02:08.244927", "rc": 0, "start": "2025-12-06 10:02:07.550624", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mds.mds update...", "stdout_lines": ["Scheduled mds.mds update..."]} TASK [ceph_migrate : Print the resulting spec] 
********************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Wait for the orchestrator to process the spec] ************ Pausing for 30 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2025-12-06 10:02:08.392093", "stderr": "", "stdout": "Paused for 30.03 seconds", "stop": "2025-12-06 10:02:38.421727", "user_input": ""} TASK [ceph_migrate : Reload the updated mdsmap] ******************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "fs", "status", "cephfs", "-f", "json"], "delta": "0:00:00.711757", "end": "2025-12-06 10:02:39.662523", "msg": "", "rc": 0, "start": "2025-12-06 10:02:38.950766", "stderr": "", "stderr_lines": [], "stdout": "\n{\"clients\": [{\"clients\": 0, \"fs\": \"cephfs\"}], \"mds_version\": [{\"daemon\": [\"mds.np0005548787.rhwfmx\", \"mds.np0005548785.sozsvf\", \"mds.np0005548789.vxwwsq\", \"mds.np0005548788.erzujf\", \"mds.np0005548786.upwgxy\", \"mds.np0005548790.vhcezv\"], \"version\": \"ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable)\"}], \"mdsmap\": [{\"caps\": 0, \"dirs\": 12, \"dns\": 10, \"inos\": 13, \"name\": \"mds.np0005548787.rhwfmx\", \"rank\": 0, \"rate\": 0, \"state\": \"active\"}, {\"name\": \"mds.np0005548785.sozsvf\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548789.vxwwsq\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548788.erzujf\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548786.upwgxy\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548790.vhcezv\", \"state\": \"standby\"}], \"pools\": [{\"avail\": 14014952448, \"id\": 7, \"name\": \"manila_metadata\", \"type\": \"metadata\", \"used\": 98304}, {\"avail\": 14014952448, \"id\": 6, \"name\": \"manila_data\", \"type\": \"data\", \"used\": 0}]}", "stdout_lines": ["", "{\"clients\": [{\"clients\": 0, \"fs\": \"cephfs\"}], \"mds_version\": [{\"daemon\": [\"mds.np0005548787.rhwfmx\", \"mds.np0005548785.sozsvf\", \"mds.np0005548789.vxwwsq\", \"mds.np0005548788.erzujf\", \"mds.np0005548786.upwgxy\", \"mds.np0005548790.vhcezv\"], \"version\": \"ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable)\"}], \"mdsmap\": [{\"caps\": 0, \"dirs\": 12, \"dns\": 10, \"inos\": 13, \"name\": \"mds.np0005548787.rhwfmx\", \"rank\": 0, \"rate\": 0, \"state\": \"active\"}, {\"name\": \"mds.np0005548785.sozsvf\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548789.vxwwsq\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548788.erzujf\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548786.upwgxy\", \"state\": \"standby\"}, {\"name\": \"mds.np0005548790.vhcezv\", \"state\": \"standby\"}], \"pools\": [{\"avail\": 14014952448, \"id\": 7, \"name\": \"manila_metadata\", \"type\": \"metadata\", \"used\": 98304}, {\"avail\": 14014952448, \"id\": 6, \"name\": \"manila_data\", \"type\": \"data\", \"used\": 0}]}"]} TASK [ceph_migrate : Get MDS Daemons] ****************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mds_daemons": {"clients": [{"clients": 0, "fs": "cephfs"}], "mds_version": [{"daemon": ["mds.np0005548787.rhwfmx", "mds.np0005548785.sozsvf", 
"mds.np0005548789.vxwwsq", "mds.np0005548788.erzujf", "mds.np0005548786.upwgxy", "mds.np0005548790.vhcezv"], "version": "ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable)"}], "mdsmap": [{"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "mds.np0005548787.rhwfmx", "rank": 0, "rate": 0, "state": "active"}, {"name": "mds.np0005548785.sozsvf", "state": "standby"}, {"name": "mds.np0005548789.vxwwsq", "state": "standby"}, {"name": "mds.np0005548788.erzujf", "state": "standby"}, {"name": "mds.np0005548786.upwgxy", "state": "standby"}, {"name": "mds.np0005548790.vhcezv", "state": "standby"}], "pools": [{"avail": 14014952448, "id": 7, "name": "manila_metadata", "type": "metadata", "used": 98304}, {"avail": 14014952448, "id": 6, "name": "manila_data", "type": "data", "used": 0}]}}, "changed": false} TASK [ceph_migrate : Print Daemons] ******************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Get MDS daemons that are not part of decomm nodes] ******** skipping: [np0005548785.localdomain] => (item={'caps': 0, 'dirs': 12, 'dns': 10, 'inos': 13, 'name': 'mds.np0005548787.rhwfmx', 'rank': 0, 'rate': 0, 'state': 'active'}) => {"ansible_loop_var": "item", "changed": false, "false_condition": "item.state == \"standby\"", "item": {"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "mds.np0005548787.rhwfmx", "rank": 0, "rate": 0, "state": "active"}, "skip_reason": "Conditional result was False"} ok: [np0005548785.localdomain] => (item={'name': 'mds.np0005548785.sozsvf', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005548785.sozsvf", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005548785.sozsvf", "state": "standby"}} ok: [np0005548785.localdomain] => (item={'name': 'mds.np0005548789.vxwwsq', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005548789.vxwwsq", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005548789.vxwwsq", "state": "standby"}} ok: [np0005548785.localdomain] => (item={'name': 'mds.np0005548788.erzujf', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005548788.erzujf", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005548788.erzujf", "state": "standby"}} ok: [np0005548785.localdomain] => (item={'name': 'mds.np0005548786.upwgxy', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005548786.upwgxy", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005548786.upwgxy", "state": "standby"}} ok: [np0005548785.localdomain] => (item={'name': 'mds.np0005548790.vhcezv', 'state': 'standby'}) => {"ansible_facts": {"mds_aff_daemon": {"name": "mds.np0005548790.vhcezv", "state": "standby"}}, "ansible_loop_var": "item", "changed": false, "item": {"name": "mds.np0005548790.vhcezv", "state": "standby"}} TASK [ceph_migrate : Affinity daemon selected] ********************************* ok: [np0005548785.localdomain] => { "msg": { "name": "mds.np0005548790.vhcezv", "state": "standby" } } TASK [ceph_migrate : Set MDS affinity] ***************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 
1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring config set mds.np0005548790.vhcezv mds_join_fs cephfs", "delta": "0:00:00.814121", "end": "2025-12-06 10:02:41.343984", "msg": "", "rc": 0, "start": "2025-12-06 10:02:40.529863", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548785.localdomain", "mds"], "delta": "0:00:00.732920", "end": "2025-12-06 10:02:42.770332", "item": ["np0005548785.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:42.037412", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005548785.localdomain", "stdout_lines": ["Removed label mds from host np0005548785.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548786.localdomain", "mds"], "delta": "0:00:00.677820", "end": "2025-12-06 10:02:44.043410", "item": ["np0005548786.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:43.365590", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005548786.localdomain", "stdout_lines": ["Removed label mds from host np0005548786.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', 'mds']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548787.localdomain", "mds"], "delta": "0:00:00.648573", "end": "2025-12-06 10:02:45.218087", "item": ["np0005548787.localdomain", "mds"], "msg": "", "rc": 0, "start": "2025-12-06 10:02:44.569514", "stderr": "", "stderr_lines": [], "stdout": "Removed label mds from host np0005548787.localdomain", "stdout_lines": ["Removed label mds from host np0005548787.localdomain"]} TASK [ceph_migrate : Wait daemons] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain TASK [ceph_migrate : print daemon id 
option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mds] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mds", "-f", "json"], "delta": "0:00:00.813927", "end": "2025-12-06 10:02:46.781599", "msg": "", "rc": 0, "start": "2025-12-06 10:02:45.967672", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"717814a4cda1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2025-12-06T08:18:49.864104Z\", \"daemon_id\": \"mds.np0005548785.sozsvf\", \"daemon_name\": \"mds.mds.np0005548785.sozsvf\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T08:18:49.933128Z daemon:mds.mds.np0005548785.sozsvf [INFO] \\\"Deployed mds.mds.np0005548785.sozsvf on host 'np0005548785.localdomain'\\\"\"], \"hostname\": \"np0005548785.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:45.142963Z\", \"memory_usage\": 26015170, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T08:18:49.773769Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"41e4392bc3b8\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2025-12-06T08:18:51.984864Z\", \"daemon_id\": \"mds.np0005548786.upwgxy\", \"daemon_name\": \"mds.mds.np0005548786.upwgxy\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T08:18:52.074476Z daemon:mds.mds.np0005548786.upwgxy [INFO] \\\"Deployed mds.mds.np0005548786.upwgxy on host 'np0005548786.localdomain'\\\"\"], \"hostname\": \"np0005548786.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:44.724931Z\", \"memory_usage\": 28384952, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T08:18:51.897815Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"5af746386f79\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": 
\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.16%\", \"created\": \"2025-12-06T08:18:47.696777Z\", \"daemon_id\": \"mds.np0005548787.rhwfmx\", \"daemon_name\": \"mds.mds.np0005548787.rhwfmx\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T08:18:47.782357Z daemon:mds.mds.np0005548787.rhwfmx [INFO] \\\"Deployed mds.mds.np0005548787.rhwfmx on host 'np0005548787.localdomain'\\\"\"], \"hostname\": \"np0005548787.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:44.330087Z\", \"memory_usage\": 26633830, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T08:18:47.588797Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"381108e5c18c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"5.75%\", \"created\": \"2025-12-06T10:02:15.171675Z\", \"daemon_id\": \"mds.np0005548788.erzujf\", \"daemon_name\": \"mds.mds.np0005548788.erzujf\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T10:02:15.244227Z daemon:mds.mds.np0005548788.erzujf [INFO] \\\"Deployed mds.mds.np0005548788.erzujf on host 'np0005548788.localdomain'\\\"\"], \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:17.388771Z\", \"memory_usage\": 14355005, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T10:02:15.072065Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"96c037b833c1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.02%\", \"created\": \"2025-12-06T10:02:12.785592Z\", \"daemon_id\": \"mds.np0005548789.vxwwsq\", \"daemon_name\": \"mds.mds.np0005548789.vxwwsq\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T10:02:12.873827Z daemon:mds.mds.np0005548789.vxwwsq [INFO] \\\"Deployed mds.mds.np0005548789.vxwwsq on host 'np0005548789.localdomain'\\\"\"], \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:17.384370Z\", \"memory_usage\": 13170114, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T10:02:12.688539Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"c9df1b3b889e\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", 
\"cpu_percentage\": \"1.40%\", \"created\": \"2025-12-06T10:02:10.370803Z\", \"daemon_id\": \"mds.np0005548790.vhcezv\", \"daemon_name\": \"mds.mds.np0005548790.vhcezv\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T10:02:10.441910Z daemon:mds.mds.np0005548790.vhcezv [INFO] \\\"Deployed mds.mds.np0005548790.vhcezv on host 'np0005548790.localdomain'\\\"\"], \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:17.179405Z\", \"memory_usage\": 14323548, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T10:02:10.270508Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"717814a4cda1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2025-12-06T08:18:49.864104Z\", \"daemon_id\": \"mds.np0005548785.sozsvf\", \"daemon_name\": \"mds.mds.np0005548785.sozsvf\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T08:18:49.933128Z daemon:mds.mds.np0005548785.sozsvf [INFO] \\\"Deployed mds.mds.np0005548785.sozsvf on host 'np0005548785.localdomain'\\\"\"], \"hostname\": \"np0005548785.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:45.142963Z\", \"memory_usage\": 26015170, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T08:18:49.773769Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"41e4392bc3b8\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.04%\", \"created\": \"2025-12-06T08:18:51.984864Z\", \"daemon_id\": \"mds.np0005548786.upwgxy\", \"daemon_name\": \"mds.mds.np0005548786.upwgxy\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T08:18:52.074476Z daemon:mds.mds.np0005548786.upwgxy [INFO] \\\"Deployed mds.mds.np0005548786.upwgxy on host 'np0005548786.localdomain'\\\"\"], \"hostname\": \"np0005548786.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:44.724931Z\", \"memory_usage\": 28384952, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T08:18:51.897815Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"5af746386f79\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.16%\", \"created\": 
\"2025-12-06T08:18:47.696777Z\", \"daemon_id\": \"mds.np0005548787.rhwfmx\", \"daemon_name\": \"mds.mds.np0005548787.rhwfmx\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T08:18:47.782357Z daemon:mds.mds.np0005548787.rhwfmx [INFO] \\\"Deployed mds.mds.np0005548787.rhwfmx on host 'np0005548787.localdomain'\\\"\"], \"hostname\": \"np0005548787.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:44.330087Z\", \"memory_usage\": 26633830, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T08:18:47.588797Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"381108e5c18c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"5.75%\", \"created\": \"2025-12-06T10:02:15.171675Z\", \"daemon_id\": \"mds.np0005548788.erzujf\", \"daemon_name\": \"mds.mds.np0005548788.erzujf\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T10:02:15.244227Z daemon:mds.mds.np0005548788.erzujf [INFO] \\\"Deployed mds.mds.np0005548788.erzujf on host 'np0005548788.localdomain'\\\"\"], \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:17.388771Z\", \"memory_usage\": 14355005, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T10:02:15.072065Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"96c037b833c1\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.02%\", \"created\": \"2025-12-06T10:02:12.785592Z\", \"daemon_id\": \"mds.np0005548789.vxwwsq\", \"daemon_name\": \"mds.mds.np0005548789.vxwwsq\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T10:02:12.873827Z daemon:mds.mds.np0005548789.vxwwsq [INFO] \\\"Deployed mds.mds.np0005548789.vxwwsq on host 'np0005548789.localdomain'\\\"\"], \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:17.384370Z\", \"memory_usage\": 13170114, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T10:02:12.688539Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"c9df1b3b889e\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.40%\", \"created\": \"2025-12-06T10:02:10.370803Z\", \"daemon_id\": 
\"mds.np0005548790.vhcezv\", \"daemon_name\": \"mds.mds.np0005548790.vhcezv\", \"daemon_type\": \"mds\", \"events\": [\"2025-12-06T10:02:10.441910Z daemon:mds.mds.np0005548790.vhcezv [INFO] \\\"Deployed mds.mds.np0005548790.vhcezv on host 'np0005548790.localdomain'\\\"\"], \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:02:17.179405Z\", \"memory_usage\": 14323548, \"ports\": [], \"service_name\": \"mds.mds\", \"started\": \"2025-12-06T10:02:10.270508Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : Sleep before moving to the next phase] ******************** Pausing for 30 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2025-12-06 10:02:46.984779", "stderr": "", "stdout": "Paused for 30.03 seconds", "stop": "2025-12-06 10:03:17.012427", "user_input": ""} TASK [ceph_migrate : Get ceph_cli] ********************************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if RGW VIPs are not defined] ************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => (item=['np0005548788.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005548788.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=['np0005548789.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005548789.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => (item=['np0005548790.localdomain', 'rgw']) => {"ansible_loop_var": "item", "changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "item": ["np0005548790.localdomain", "rgw"], "skip_reason": "Conditional result was False"} skipping: [np0005548785.localdomain] => {"changed": false, "msg": "All items skipped"} TASK [ceph_migrate : RGW - Load Spec from the orchestrator] ******************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the loaded data] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Apply ceph rgw keystone config] *************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": 
"Conditional result was False"} TASK [ceph_migrate : Update the RGW spec definition] *************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Create the Ingress Daemon spec definition for RGW] ******** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "ceph_daemons_layout.rgw | default(true) | bool"} TASK [ceph_migrate : Wait for cephadm to redeploy] ***************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : RGW - wait daemons] *************************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Setup a Ceph client to the first node] ******************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_client.yaml for np0005548785.localdomain TASK [ceph_migrate : TMP_CLIENT - Patch os-net-config config and setup a tmp client IP] *** changed: [np0005548785.localdomain] => {"backup": "/etc/os-net-config/tripleo_config.yaml.473664.2025-12-06@10:03:18~", "changed": true, "msg": "line added and ownership, perms or SE linux context changed"} TASK [ceph_migrate : TMP_CLIENT - Refresh os-net-config] *********************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["os-net-config", "-c", "/etc/os-net-config/tripleo_config.yaml"], "delta": "0:00:07.322142", "end": "2025-12-06 10:03:26.206000", "msg": "", "rc": 0, "start": "2025-12-06 10:03:18.883858", "stderr": "", "stderr_lines": [], "stdout": "2025-12-06 10:03:19.786 ERROR os_net_config.execute stderr : WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.\nWARN : [ifdown] 'network-scripts' will be removed from distribution in near future.\nWARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.\n\n2025-12-06 10:03:26.140 ERROR os_net_config.execute stderr : WARN : [ifup] You are using 'ifup' script provided by 'network-scripts', which are now deprecated.\nWARN : [ifup] 'network-scripts' will be removed from distribution in near future.\nWARN : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.", "stdout_lines": ["2025-12-06 10:03:19.786 ERROR os_net_config.execute stderr : WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.", "WARN : [ifdown] 'network-scripts' will be removed from distribution in near future.", "WARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.", "", "2025-12-06 10:03:26.140 ERROR os_net_config.execute stderr : WARN : [ifup] You are 
using 'ifup' script provided by 'network-scripts', which are now deprecated.", "WARN : [ifup] 'network-scripts' will be removed from distribution in near future.", "WARN : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well."]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005548785.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005548785.localdomain] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005548785.localdomain] => {"changed": false, "examined": 4, "files": [{"atime": 1765009097.9261138, "ctime": 1765009096.9190826, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 67109774, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765008020.618614, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765009097.937114, "ctime": 1765009096.9190826, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 67109773, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1765007879.1861506, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765009135.615282, "ctime": 1765009133.61122, "dev": 64516, "gid": 167, "gr_name": "", "inode": 645926251, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765009133.328211, "nlink": 1, "path": "/etc/ceph/ceph.client.openstack.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765009136.7633176, "ctime": 1765009134.5322485, "dev": 64516, "gid": 167, "gr_name": "", "inode": 687904793, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765009134.2082384, "nlink": 1, "path": "/etc/ceph/ceph.client.manila.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 4, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005548785.localdomain] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 271, 'inode': 67109774, 'dev': 64516, 'nlink': 1, 'atime': 1765009097.9261138, 'mtime': 1765008020.618614, 'ctime': 1765009096.9190826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "74b6793c28400fa0a16ce9abdc4efa82feeb961d", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1765009097.9261138, "ctime": 1765009096.9190826, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 67109774, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765008020.618614, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 271, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "d8c198f6d13a8a8f30c6e29fa9736cff", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 271, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005548785.localdomain] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 67109773, 'dev': 64516, 'nlink': 1, 'atime': 1765009097.937114, 'mtime': 1765007879.1861506, 'ctime': 1765009096.9190826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "904318df38203f10f7fb10e4cc9586ba0770617d", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1765009097.937114, "ctime": 1765009096.9190826, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 67109773, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1765007879.1861506, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "7352b81dd3becac122410b306a619c64", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} changed: [np0005548785.localdomain] => (item={'path': '/etc/ceph/ceph.client.openstack.keyring', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 167, 'gid': 167, 'size': 231, 'inode': 645926251, 'dev': 64516, 'nlink': 1, 'atime': 1765009135.615282, 'mtime': 1765009133.328211, 'ctime': 1765009133.61122, 'gr_name': '', 'pw_name': '', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "9d631b6552ddeaa0e75a39b18f2bdb583e0e85e3", "dest": 
"/home/tripleo-admin/ceph_client/ceph.client.openstack.keyring", "gid": 0, "group": "root", "item": {"atime": 1765009135.615282, "ctime": 1765009133.61122, "dev": 64516, "gid": 167, "gr_name": "", "inode": 645926251, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765009133.328211, "nlink": 1, "path": "/etc/ceph/ceph.client.openstack.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "8bd5fd93a11a1a35c644f191949b98e8", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 231, "src": "/etc/ceph/ceph.client.openstack.keyring", "state": "file", "uid": 0} changed: [np0005548785.localdomain] => (item={'path': '/etc/ceph/ceph.client.manila.keyring', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 167, 'gid': 167, 'size': 153, 'inode': 687904793, 'dev': 64516, 'nlink': 1, 'atime': 1765009136.7633176, 'mtime': 1765009134.2082384, 'ctime': 1765009134.5322485, 'gr_name': '', 'pw_name': '', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "04fcaa63c42fa3b2b702e4421ebc774041538ebd", "dest": "/home/tripleo-admin/ceph_client/ceph.client.manila.keyring", "gid": 0, "group": "root", "item": {"atime": 1765009136.7633176, "ctime": 1765009134.5322485, "dev": 64516, "gid": 167, "gr_name": "", "inode": 687904793, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765009134.2082384, "nlink": 1, "path": "/etc/ceph/ceph.client.manila.keyring", "pw_name": "", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 167, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "ad860f30d5187933e91a8976fb151a8a", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 153, "src": "/etc/ceph/ceph.client.manila.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : Render global ceph.conf] ********************************** changed: [np0005548785.localdomain] => {"changed": true, "checksum": "268185d9e6258eec13ecf92b158b41045a26873f", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "md5sum": "a8842acf8fb69e0f3f6d14c97d137d87", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 142, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015410.2304945-60385-207835140425871/source", "state": "file", "uid": 0} TASK [ceph_migrate : MGR - Migrate RBD node] *********************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mgr.yaml for np0005548785.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => 
{"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MGR - Setup Mon/Mgr label to the target node] ************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005548785.localdomain TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005548785.localdomain] => (item=['np0005548788.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548788.localdomain", "mgr"], "delta": "0:00:00.759387", "end": "2025-12-06 10:03:33.228307", "item": ["np0005548788.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:32.468920", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005548788.localdomain", "stdout_lines": ["Added label mgr to host np0005548788.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548789.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548789.localdomain", "mgr"], "delta": "0:00:00.746766", "end": "2025-12-06 10:03:34.558489", "item": ["np0005548789.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:33.811723", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005548789.localdomain", "stdout_lines": ["Added label mgr to host np0005548789.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548790.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548790.localdomain", "mgr"], "delta": "0:00:00.819728", "end": "2025-12-06 10:03:35.958098", "item": ["np0005548790.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:35.138370", "stderr": "", "stderr_lines": [], "stdout": "Added label mgr to host np0005548790.localdomain", "stdout_lines": ["Added label mgr to host np0005548790.localdomain"]} TASK [ceph_migrate : MGR - Load Spec from the orchestrator] ******************** ok: [np0005548785.localdomain] => 
{"ansible_facts": {"mgr_spec": {"service_name": "mgr", "service_type": "mgr", "spec": {}}}, "changed": false} TASK [ceph_migrate : Update the MGR Daemon spec definition] ******************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mgr:/home/tripleo-admin/mgr:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mgr"], "delta": "0:00:00.747722", "end": "2025-12-06 10:03:37.402382", "rc": 0, "start": "2025-12-06 10:03:36.654660", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mgr update...", "stdout_lines": ["Scheduled mgr update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MGR - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mgr] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mgr", "-f", "json"], "delta": "0:00:00.661423", "end": "2025-12-06 10:03:38.794142", "msg": "", "rc": 0, "start": "2025-12-06 10:03:38.132719", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"676af8e74375\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.28%\", \"created\": \"2025-12-06T07:57:21.242945Z\", \"daemon_id\": \"np0005548785.vhqlsq\", \"daemon_name\": \"mgr.np0005548785.vhqlsq\", \"daemon_type\": \"mgr\", \"events\": [\"2025-12-06T08:00:23.767545Z daemon:mgr.np0005548785.vhqlsq [INFO] \\\"Reconfigured mgr.np0005548785.vhqlsq on host 'np0005548785.localdomain'\\\"\"], \"hostname\": \"np0005548785.localdomain\", \"is_active\": true, \"last_refresh\": \"2025-12-06T10:03:10.113172Z\", \"memory_usage\": 545049804, \"ports\": [9283, 8765], \"service_name\": \"mgr\", \"started\": \"2025-12-06T07:57:21.079127Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"fb528f0c455f\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", 
\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.18%\", \"created\": \"2025-12-06T07:59:40.983107Z\", \"daemon_id\": \"np0005548786.mczynb\", \"daemon_name\": \"mgr.np0005548786.mczynb\", \"daemon_type\": \"mgr\", \"events\": [\"2025-12-06T07:59:41.091853Z daemon:mgr.np0005548786.mczynb [INFO] \\\"Deployed mgr.np0005548786.mczynb on host 'np0005548786.localdomain'\\\"\"], \"hostname\": \"np0005548786.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:03:10.133107Z\", \"memory_usage\": 475948646, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2025-12-06T07:59:40.827875Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"8f97e9d12690\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2025-12-06T07:59:34.366736Z\", \"daemon_id\": \"np0005548787.umwsra\", \"daemon_name\": \"mgr.np0005548787.umwsra\", \"daemon_type\": \"mgr\", \"events\": [\"2025-12-06T07:59:38.958115Z daemon:mgr.np0005548787.umwsra [INFO] \\\"Deployed mgr.np0005548787.umwsra on host 'np0005548787.localdomain'\\\"\", \"2025-12-06T08:00:27.893545Z daemon:mgr.np0005548787.umwsra [INFO] \\\"Reconfigured mgr.np0005548787.umwsra on host 'np0005548787.localdomain'\\\"\"], \"hostname\": \"np0005548787.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:03:09.852854Z\", \"memory_usage\": 476053504, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2025-12-06T07:59:34.238493Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"676af8e74375\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.28%\", \"created\": \"2025-12-06T07:57:21.242945Z\", \"daemon_id\": \"np0005548785.vhqlsq\", \"daemon_name\": \"mgr.np0005548785.vhqlsq\", \"daemon_type\": \"mgr\", \"events\": [\"2025-12-06T08:00:23.767545Z daemon:mgr.np0005548785.vhqlsq [INFO] \\\"Reconfigured mgr.np0005548785.vhqlsq on host 'np0005548785.localdomain'\\\"\"], \"hostname\": \"np0005548785.localdomain\", \"is_active\": true, \"last_refresh\": \"2025-12-06T10:03:10.113172Z\", \"memory_usage\": 545049804, \"ports\": [9283, 8765], \"service_name\": \"mgr\", \"started\": \"2025-12-06T07:57:21.079127Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"fb528f0c455f\", \"container_image_digests\": 
[\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.18%\", \"created\": \"2025-12-06T07:59:40.983107Z\", \"daemon_id\": \"np0005548786.mczynb\", \"daemon_name\": \"mgr.np0005548786.mczynb\", \"daemon_type\": \"mgr\", \"events\": [\"2025-12-06T07:59:41.091853Z daemon:mgr.np0005548786.mczynb [INFO] \\\"Deployed mgr.np0005548786.mczynb on host 'np0005548786.localdomain'\\\"\"], \"hostname\": \"np0005548786.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:03:10.133107Z\", \"memory_usage\": 475948646, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2025-12-06T07:59:40.827875Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}, {\"container_id\": \"8f97e9d12690\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.19%\", \"created\": \"2025-12-06T07:59:34.366736Z\", \"daemon_id\": \"np0005548787.umwsra\", \"daemon_name\": \"mgr.np0005548787.umwsra\", \"daemon_type\": \"mgr\", \"events\": [\"2025-12-06T07:59:38.958115Z daemon:mgr.np0005548787.umwsra [INFO] \\\"Deployed mgr.np0005548787.umwsra on host 'np0005548787.localdomain'\\\"\", \"2025-12-06T08:00:27.893545Z daemon:mgr.np0005548787.umwsra [INFO] \\\"Reconfigured mgr.np0005548787.umwsra on host 'np0005548787.localdomain'\\\"\"], \"hostname\": \"np0005548787.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:03:09.852854Z\", \"memory_usage\": 476053504, \"ports\": [8765], \"service_name\": \"mgr\", \"started\": \"2025-12-06T07:59:34.238493Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : MON - Load Spec from the orchestrator] ******************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mon_spec": {"service_name": "mon", "service_type": "mon", "spec": {}}}, "changed": false} TASK [ceph_migrate : Set/Unset labels - add] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - add] *********************************** changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548785.localdomain", "mon"], "delta": "0:00:00.716677", "end": 
"2025-12-06 10:03:40.228896", "item": ["np0005548785.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:39.512219", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005548785.localdomain", "stdout_lines": ["Added label mon to host np0005548785.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548785.localdomain", "_admin"], "delta": "0:00:00.723678", "end": "2025-12-06 10:03:41.561059", "item": ["np0005548785.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:40.837381", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005548785.localdomain", "stdout_lines": ["Added label _admin to host np0005548785.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548786.localdomain", "mon"], "delta": "0:00:00.804105", "end": "2025-12-06 10:03:42.974194", "item": ["np0005548786.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:42.170089", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005548786.localdomain", "stdout_lines": ["Added label mon to host np0005548786.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548786.localdomain", "_admin"], "delta": "0:00:00.798684", "end": "2025-12-06 10:03:44.373844", "item": ["np0005548786.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:43.575160", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005548786.localdomain", "stdout_lines": ["Added label _admin to host np0005548786.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548787.localdomain", "mon"], "delta": "0:00:00.825441", "end": "2025-12-06 10:03:45.772276", "item": ["np0005548787.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:44.946835", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host 
np0005548787.localdomain", "stdout_lines": ["Added label mon to host np0005548787.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548787.localdomain", "_admin"], "delta": "0:00:00.733462", "end": "2025-12-06 10:03:47.124340", "item": ["np0005548787.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:46.390878", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005548787.localdomain", "stdout_lines": ["Added label _admin to host np0005548787.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548788.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548788.localdomain", "mon"], "delta": "0:00:00.832119", "end": "2025-12-06 10:03:48.512838", "item": ["np0005548788.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:47.680719", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005548788.localdomain", "stdout_lines": ["Added label mon to host np0005548788.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548788.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548788.localdomain", "_admin"], "delta": "0:00:00.714895", "end": "2025-12-06 10:03:49.782657", "item": ["np0005548788.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:49.067762", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005548788.localdomain", "stdout_lines": ["Added label _admin to host np0005548788.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548789.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548789.localdomain", "mon"], "delta": "0:00:00.673884", "end": "2025-12-06 10:03:51.036843", "item": ["np0005548789.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:50.362959", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005548789.localdomain", "stdout_lines": ["Added label mon to host np0005548789.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548789.localdomain', '_admin']) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548789.localdomain", "_admin"], "delta": "0:00:00.842701", "end": "2025-12-06 10:03:52.433956", "item": ["np0005548789.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:51.591255", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005548789.localdomain", "stdout_lines": ["Added label _admin to host np0005548789.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548790.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548790.localdomain", "mon"], "delta": "0:00:00.799340", "end": "2025-12-06 10:03:53.795689", "item": ["np0005548790.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:52.996349", "stderr": "", "stderr_lines": [], "stdout": "Added label mon to host np0005548790.localdomain", "stdout_lines": ["Added label mon to host np0005548790.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548790.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "add", "np0005548790.localdomain", "_admin"], "delta": "0:00:00.715636", "end": "2025-12-06 10:03:55.144352", "item": ["np0005548790.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:03:54.428716", "stderr": "", "stderr_lines": [], "stdout": "Added label _admin to host np0005548790.localdomain", "stdout_lines": ["Added label _admin to host np0005548790.localdomain"]} TASK [ceph_migrate : Normalize the mon spec to use labels] ********************* ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/etc/ceph:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.792179", "end": "2025-12-06 10:03:56.597095", "rc": 0, "start": "2025-12-06 10:03:55.804916", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : RBD - wait new daemons to be available] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain => (item=np0005548788.localdomain) included: 
/home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain => (item=np0005548789.localdomain) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain => (item=np0005548790.localdomain) TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* FAILED - RETRYING: [np0005548785.localdomain]: wait for mon (200 retries left). FAILED - RETRYING: [np0005548785.localdomain]: wait for mon (199 retries left). FAILED - RETRYING: [np0005548785.localdomain]: wait for mon (198 retries left). changed: [np0005548785.localdomain] => {"attempts": 4, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548788", "-f", "json"], "delta": "0:00:00.770841", "end": "2025-12-06 10:04:20.911708", "msg": "", "rc": 0, "start": "2025-12-06 10:04:20.140867", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"31023b3d2b24\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.52%\", \"created\": \"2025-12-06T10:04:09.873156Z\", \"daemon_id\": \"np0005548788\", \"daemon_name\": \"mon.np0005548788\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:14.053235Z daemon:mon.np0005548788 [INFO] \\\"Deployed mon.np0005548788 on host 'np0005548788.localdomain'\\\"\"], \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:16.494869Z\", \"memory_request\": 2147483648, \"memory_usage\": 39856373, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:09.763445Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"31023b3d2b24\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.52%\", \"created\": \"2025-12-06T10:04:09.873156Z\", \"daemon_id\": \"np0005548788\", \"daemon_name\": \"mon.np0005548788\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:14.053235Z daemon:mon.np0005548788 [INFO] \\\"Deployed mon.np0005548788 on host 'np0005548788.localdomain'\\\"\"], \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": 
\"2025-12-06T10:04:16.494869Z\", \"memory_request\": 2147483648, \"memory_usage\": 39856373, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:09.763445Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548789", "-f", "json"], "delta": "0:00:00.617484", "end": "2025-12-06 10:04:22.132663", "msg": "", "rc": 0, "start": "2025-12-06 10:04:21.515179", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"8db79eb988f6\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.73%\", \"created\": \"2025-12-06T10:04:04.570095Z\", \"daemon_id\": \"np0005548789\", \"daemon_name\": \"mon.np0005548789\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:07.290529Z daemon:mon.np0005548789 [INFO] \\\"Deployed mon.np0005548789 on host 'np0005548789.localdomain'\\\"\"], \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:16.102520Z\", \"memory_request\": 2147483648, \"memory_usage\": 39845888, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:04.444117Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"8db79eb988f6\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.73%\", \"created\": \"2025-12-06T10:04:04.570095Z\", \"daemon_id\": \"np0005548789\", \"daemon_name\": \"mon.np0005548789\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:07.290529Z daemon:mon.np0005548789 [INFO] \\\"Deployed mon.np0005548789 on host 'np0005548789.localdomain'\\\"\"], \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:16.102520Z\", \"memory_request\": 2147483648, \"memory_usage\": 39845888, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:04.444117Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : print daemon id option] *********************************** skipping: 
[np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/etc/ceph:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548790", "-f", "json"], "delta": "0:00:00.714676", "end": "2025-12-06 10:04:23.543740", "msg": "", "rc": 0, "start": "2025-12-06 10:04:22.829064", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"c2f79d602097\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.51%\", \"created\": \"2025-12-06T10:04:01.810794Z\", \"daemon_id\": \"np0005548790\", \"daemon_name\": \"mon.np0005548790\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:01.903109Z daemon:mon.np0005548790 [INFO] \\\"Deployed mon.np0005548790 on host 'np0005548790.localdomain'\\\"\"], \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:16.373365Z\", \"memory_request\": 2147483648, \"memory_usage\": 42100326, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:01.714811Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"c2f79d602097\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.51%\", \"created\": \"2025-12-06T10:04:01.810794Z\", \"daemon_id\": \"np0005548790\", \"daemon_name\": \"mon.np0005548790\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:01.903109Z daemon:mon.np0005548790 [INFO] \\\"Deployed mon.np0005548790 on host 'np0005548790.localdomain'\\\"\"], \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:16.373365Z\", \"memory_request\": 2147483648, \"memory_usage\": 42100326, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:01.714811Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : MON - Migrate RBD node] *********************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005548785.localdomain => (item=['np0005548785.localdomain', 'np0005548788.localdomain']) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005548785.localdomain => 
(item=['np0005548786.localdomain', 'np0005548789.localdomain']) included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/mon.yaml for np0005548785.localdomain => (item=['np0005548787.localdomain', 'np0005548790.localdomain']) TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005548785.localdomain] => { "msg": "Migrate mon: np0005548785.localdomain to node: np0005548788.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005548785.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.103"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.772342", "end": "2025-12-06 10:04:25.378983", "msg": "", "rc": 0, "start": "2025-12-06 10:04:24.606641", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":26,\"quorum\":[0,1,2,3,4,5],\"quorum_names\":[\"np0005548785\",\"np0005548787\",\"np0005548786\",\"np0005548790\",\"np0005548789\",\"np0005548788\"],\"quorum_age\":5,\"monmap\":{\"epoch\":6,\"min_mon_release_name\":\"reef\",\"num_mons\":6},\"osdmap\":{\"epoch\":87,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":593137664,\"bytes_avail\":44478853120,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":71,\"modified\":\"2025-12-06T10:04:02.591091+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005548788.yvwbqq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005548789.mzhmje\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 
0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005548790.kvkfyr\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":26,\"quorum\":[0,1,2,3,4,5],\"quorum_names\":[\"np0005548785\",\"np0005548787\",\"np0005548786\",\"np0005548790\",\"np0005548789\",\"np0005548788\"],\"quorum_age\":5,\"monmap\":{\"epoch\":6,\"min_mon_release_name\":\"reef\",\"num_mons\":6},\"osdmap\":{\"epoch\":87,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109571242,\"bytes_used\":593137664,\"bytes_avail\":44478853120,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":71,\"modified\":\"2025-12-06T10:04:02.591091+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005548788.yvwbqq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005548789.mzhmje\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}},\"np0005548790.kvkfyr\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "cur_mon != client_node", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.748859", "end": "2025-12-06 10:04:26.796354", "msg": "", "rc": 0, "start": "2025-12-06 10:04:26.047495", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":13,\"available\":true,\"active_name\":\"np0005548785.vhqlsq\",\"num_standby\":5}", "stdout_lines": ["", "{\"epoch\":13,\"available\":true,\"active_name\":\"np0005548785.vhqlsq\",\"num_standby\":5}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005548785.vhqlsq", "available": true, "epoch": 13, "num_standby": 5}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail 
mgr if active in the current node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005548785.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.882269", "end": "2025-12-06 10:04:28.535321", "msg": "", "rc": 0, "start": "2025-12-06 10:04:27.653052", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:04:28.647131", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:04:38.659371", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005548785.localdomain: jid=j322159543929.480820 changed: [np0005548785.localdomain] => {"ansible_job_id": "j322159543929.480820", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.633153", "end": "2025-12-06 10:04:40.225752", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j322159543929.480820", "start": "2025-12-06 10:04:39.592599", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** 
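
At this point in the run the role has verified mon quorum, found that the active mgr (np0005548785.vhqlsq) sits on the source node being migrated, forced a mgr failover, and re-checked that the cephadm orchestrator still responds before taking any destructive step. Stripped of the podman wrapper stored in the ceph_cli fact, the underlying sequence is roughly the following sketch, reconstructed from the commands visible in this log (the real invocations run inside the rhceph-7-rhel9 container with the --fsid, -c and -k options shown above):

  ceph mgr stat -f json            # identify the active mgr
  ceph mgr fail                    # force a failover off the node about to be drained
  ceph orch status --format json   # confirm cephadm is still responsive

The drain sequence reported next removes the mon daemon, the mon/mgr/_admin host labels, and finally the host itself from the cluster.
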
included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005548785.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548785", "-f", "json"], "delta": "0:00:00.698279", "end": "2025-12-06 10:04:42.911610", "msg": "", "rc": 0, "start": "2025-12-06 10:04:42.213331", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"0b05145abf99\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.70%\", \"created\": \"2025-12-06T07:57:16.479799Z\", \"daemon_id\": \"np0005548785\", \"daemon_name\": \"mon.np0005548785\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548785.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:31.135016Z\", \"memory_request\": 2147483648, \"memory_usage\": 143759769, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T07:57:19.510726Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"0b05145abf99\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.70%\", \"created\": \"2025-12-06T07:57:16.479799Z\", \"daemon_id\": \"np0005548785\", \"daemon_name\": \"mon.np0005548785\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548785.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:31.135016Z\", \"memory_request\": 2147483648, \"memory_usage\": 143759769, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T07:57:19.510726Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": 
\"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005548785", "--force"], "delta": "0:00:02.468646", "end": "2025-12-06 10:04:46.069473", "msg": "", "rc": 0, "start": "2025-12-06 10:04:43.600827", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005548785 from host 'np0005548785.localdomain'", "stdout_lines": ["Removed mon.np0005548785 from host 'np0005548785.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005548785.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548785.localdomain", "mon"], "delta": "0:00:02.883660", "end": "2025-12-06 10:04:49.712627", "item": ["np0005548785.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:04:46.828967", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005548785.localdomain", "stdout_lines": ["Removed label mon from host np0005548785.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548785.localdomain", "mgr"], "delta": "0:00:00.792562", "end": "2025-12-06 10:04:51.040713", "item": ["np0005548785.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2025-12-06 10:04:50.248151", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005548785.localdomain", "stdout_lines": ["Removed label mgr from host np0005548785.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548785.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548785.localdomain", "_admin"], "delta": "0:00:00.725873", "end": "2025-12-06 10:04:52.339525", "item": ["np0005548785.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:04:51.613652", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005548785.localdomain", "stdout_lines": ["Removed label _admin from host np0005548785.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:04:52.469305", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:05:02.479576", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005548785.localdomain"], "delta": "0:00:00.736059", "end": "2025-12-06 10:05:03.761165", "msg": "", "rc": 0, "start": "2025-12-06 10:05:03.025106", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005548785.localdomain'\ntype id \n-------------------- ---------------\nmgr np0005548785.vhqlsq\ncrash np0005548785 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005548785.localdomain'", "type id ", "-------------------- ---------------", "mgr np0005548785.vhqlsq", "crash np0005548785 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005548785.localdomain", "-f", "json"], "delta": "0:00:00.729790", "end": "2025-12-06 10:05:05.172211", "msg": "", "rc": 0, "start": "2025-12-06 10:05:04.442421", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005548785.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.103\", \"hostname\": \"np0005548785.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005548785.localdomain", "--force"], "delta": "0:00:00.734366", "end": "2025-12-06 10:05:06.534947", "msg": "", "rc": 0, "start": "2025-12-06 10:05:05.800581", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005548785.localdomain'", "stdout_lines": ["Removed host 'np0005548785.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005548785.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005548785.localdomain] => {"backup": "/etc/os-net-config/tripleo_config.yaml.483070.2025-12-06@10:05:07~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.103/24", "dev", "vlan21"], "delta": "0:00:00.004787", "end": "2025-12-06 10:05:07.877107", "msg": "", "rc": 0, "start": "2025-12-06 10:05:07.872320", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005548785.localdomain -> np0005548788.localdomain(192.168.122.106)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.291724.2025-12-06@10:05:09~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005548785.localdomain -> np0005548788.localdomain(192.168.122.106)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.103/24", "dev", "vlan21"], "delta": "0:00:00.004404", "end": "2025-12-06 10:05:10.010167", "msg": "", "rc": 0, "start": "2025-12-06 10:05:10.005763", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005548785.localdomain -> np0005548788.localdomain(192.168.122.106)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.103"], "delta": "0:00:02.091969", "end": "2025-12-06 10:05:12.910924", "msg": "", "rc": 0, "start": "2025-12-06 10:05:10.818955", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.103 (172.18.0.103) 56(84) bytes of data.\n64 bytes from 172.18.0.103: icmp_seq=1 
ttl=64 time=0.075 ms\n64 bytes from 172.18.0.103: icmp_seq=2 ttl=64 time=0.050 ms\n64 bytes from 172.18.0.103: icmp_seq=3 ttl=64 time=0.046 ms\n\n--- 172.18.0.103 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2085ms\nrtt min/avg/max/mdev = 0.046/0.057/0.075/0.012 ms", "stdout_lines": ["PING 172.18.0.103 (172.18.0.103) 56(84) bytes of data.", "64 bytes from 172.18.0.103: icmp_seq=1 ttl=64 time=0.075 ms", "64 bytes from 172.18.0.103: icmp_seq=2 ttl=64 time=0.050 ms", "64 bytes from 172.18.0.103: icmp_seq=3 ttl=64 time=0.046 ms", "", "--- 172.18.0.103 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2085ms", "rtt min/avg/max/mdev = 0.046/0.057/0.075/0.012 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] ******************************************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.760197", "end": "2025-12-06 10:05:14.373335", "rc": 0, "start": "2025-12-06 10:05:13.613138", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548788", "-f", "json"], "delta": "0:00:00.741458", "end": "2025-12-06 10:05:15.789299", "msg": "", "rc": 0, "start": "2025-12-06 10:05:15.047841", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"31023b3d2b24\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.20%\", \"created\": \"2025-12-06T10:04:09.873156Z\", \"daemon_id\": \"np0005548788\", \"daemon_name\": \"mon.np0005548788\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:50.206849Z daemon:mon.np0005548788 [INFO] \\\"Reconfigured mon.np0005548788 on host 'np0005548788.localdomain'\\\"\"], \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, 
\"last_refresh\": \"2025-12-06T10:04:30.856923Z\", \"memory_request\": 2147483648, \"memory_usage\": 47909437, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:09.763445Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"31023b3d2b24\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.20%\", \"created\": \"2025-12-06T10:04:09.873156Z\", \"daemon_id\": \"np0005548788\", \"daemon_name\": \"mon.np0005548788\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:04:50.206849Z daemon:mon.np0005548788 [INFO] \\\"Reconfigured mon.np0005548788 on host 'np0005548788.localdomain'\\\"\"], \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:04:30.856923Z\", \"memory_request\": 2147483648, \"memory_usage\": 47909437, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:09.763445Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005548788", "--force"], "delta": "0:00:05.911854", "end": "2025-12-06 10:05:22.385538", "msg": "", "rc": 0, "start": "2025-12-06 10:05:16.473684", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005548788 from host 'np0005548788.localdomain'", "stdout_lines": ["Removed mon.np0005548788 from host 'np0005548788.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:05:22.525883", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:05:32.536807", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005548788.localdomain] *********** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on np0005548788.localdomain] *********** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005548788.localdomain:172.18.0.103"], "delta": "0:00:03.704646", "end": "2025-12-06 10:05:36.805471", "msg": "", "rc": 0, "start": "2025-12-06 10:05:33.100825", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005548788 on host 
'np0005548788.localdomain'", "stdout_lines": ["Deployed mon.np0005548788 on host 'np0005548788.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:05:36.951617", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:05:46.964043", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.714781", "end": "2025-12-06 10:05:48.177179", "msg": "", "rc": 0, "start": "2025-12-06 10:05:47.462398", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":44,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005548787\",\"np0005548786\",\"np0005548790\",\"np0005548789\",\"np0005548788\"],\"quorum_age\":5,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":88,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109572084,\"bytes_used\":593203200,\"bytes_avail\":44478787584,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":74,\"modified\":\"2025-12-06T10:05:30.476820+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005548785.vhqlsq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":44,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005548787\",\"np0005548786\",\"np0005548790\",\"np0005548789\",\"np0005548788\"],\"quorum_age\":5,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":88,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109572084,\"bytes_used\":593203200,\"bytes_avail\":44478787584,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":74,\"modified\":\"2025-12-06T10:05:30.476820+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005548785.vhqlsq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:01.105711", "end": "2025-12-06 10:05:49.918542", "msg": "", "rc": 0, "start": "2025-12-06 10:05:48.812831", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005548788.localdomain'\nScheduled to reconfig osd.5 on host 'np0005548788.localdomain'\nScheduled to reconfig osd.1 on host 'np0005548789.localdomain'\nScheduled to reconfig osd.4 on host 'np0005548789.localdomain'\nScheduled to reconfig osd.0 on host 'np0005548790.localdomain'\nScheduled to reconfig osd.3 on host 'np0005548790.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005548788.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005548788.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005548789.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005548789.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005548790.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005548790.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005548785.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume 
/home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.765074", "end": "2025-12-06 10:05:51.493140", "msg": "", "rc": 0, "start": "2025-12-06 10:05:50.728066", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:05:51.601115", "stderr": "", "stdout": "Paused for 10.02 seconds", "stop": "2025-12-06 10:06:01.625033", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005548785.localdomain: jid=j459144046997.484733 changed: [np0005548785.localdomain] => {"ansible_job_id": "j459144046997.484733", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.648304", "end": "2025-12-06 10:06:02.997100", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j459144046997.484733", "start": "2025-12-06 10:06:02.348796", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.744229", "end": "2025-12-06 10:06:05.628020", "rc": 0, "start": "2025-12-06 10:06:04.883791", 
"stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548788", "-f", "json"], "delta": "0:00:00.762394", "end": "2025-12-06 10:06:07.107912", "msg": "", "rc": 0, "start": "2025-12-06 10:06:06.345518", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"314d98beac8c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"4.24%\", \"created\": \"2025-12-06T10:05:36.540786Z\", \"daemon_id\": \"np0005548788\", \"daemon_name\": \"mon.np0005548788\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:05:53.711118Z\", \"memory_request\": 2147483648, \"memory_usage\": 54798581, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:05:36.446124Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"314d98beac8c\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"4.24%\", \"created\": \"2025-12-06T10:05:36.540786Z\", \"daemon_id\": \"np0005548788\", \"daemon_name\": \"mon.np0005548788\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548788.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:05:53.711118Z\", \"memory_request\": 2147483648, \"memory_usage\": 54798581, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:05:36.446124Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] 
********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005548785.localdomain] => { "msg": "Migrate mon: np0005548786.localdomain to node: np0005548789.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005548785.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.104"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.786264", "end": "2025-12-06 10:06:08.734541", "msg": "", "rc": 0, "start": "2025-12-06 10:06:07.948277", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":44,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005548787\",\"np0005548786\",\"np0005548790\",\"np0005548789\",\"np0005548788\"],\"quorum_age\":26,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":593285120,\"bytes_avail\":44478705664,\"bytes_total\":45071990784,\"read_bytes_sec\":19543,\"write_bytes_sec\":0,\"read_op_per_sec\":8,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":74,\"modified\":\"2025-12-06T10:05:30.476820+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005548785.vhqlsq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by 
cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":44,\"quorum\":[0,1,2,3,4],\"quorum_names\":[\"np0005548787\",\"np0005548786\",\"np0005548790\",\"np0005548789\",\"np0005548788\"],\"quorum_age\":26,\"monmap\":{\"epoch\":9,\"min_mon_release_name\":\"reef\",\"num_mons\":5},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":593285120,\"bytes_avail\":44478705664,\"bytes_total\":45071990784,\"read_bytes_sec\":19543,\"write_bytes_sec\":0,\"read_op_per_sec\":8,\"write_op_per_sec\":1},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":74,\"modified\":\"2025-12-06T10:05:30.476820+0000\",\"services\":{\"mgr\":{\"daemons\":{\"summary\":\"\",\"np0005548785.vhqlsq\":{\"start_epoch\":0,\"start_stamp\":\"0.000000\",\"gid\":0,\"addr\":\"(unrecognized address family 0)/0\",\"metadata\":{},\"task_status\":{}}}}}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005548785.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => {"changed": false, "examined": 2, "files": [{"atime": 1765015555.5384877, "ctime": 1765015556.0095022, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1241514855, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765015555.7454941, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 367, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765015556.9595313, "ctime": 1765015557.4215455, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1241514856, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1765015557.1725378, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 2, "msg": "All paths 
examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 367, 'inode': 1241514855, 'dev': 64516, 'nlink': 1, 'atime': 1765015555.5384877, 'mtime': 1765015555.7454941, 'ctime': 1765015556.0095022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "2520a1d6f988ac3ce9edcb55b6b2e2b12075a514", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1765015555.5384877, "ctime": 1765015556.0095022, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1241514855, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765015555.7454941, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 367, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "e6e2c83b505aed402672839bd3e77286", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 367, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 1241514856, 'dev': 64516, 'nlink': 1, 'atime': 1765015556.9595313, 'mtime': 1765015557.1725378, 'ctime': 1765015557.4215455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "904318df38203f10f7fb10e4cc9586ba0770617d", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1765015556.9595313, "ctime": 1765015557.4215455, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1241514856, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1765015557.1725378, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "7352b81dd3becac122410b306a619c64", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.689846", "end": "2025-12-06 10:06:13.393242", "msg": "", "rc": 0, "start": "2025-12-06 10:06:12.703396", "stderr": "", "stderr_lines": [], "stdout": "\n{\"epoch\":23,\"available\":true,\"active_name\":\"np0005548787.umwsra\",\"num_standby\":5}", "stdout_lines": ["", "{\"epoch\":23,\"available\":true,\"active_name\":\"np0005548787.umwsra\",\"num_standby\":5}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005548787.umwsra", "available": true, "epoch": 23, "num_standby": 5}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "mgr.active_name | regex_search(cur_mon | split('.') | first) or mgr.active_name | regex_search(target_node | split('.') | first)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005548785.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548786", "-f", "json"], "delta": "0:00:00.682880", "end": "2025-12-06 10:06:14.873349", "msg": "", "rc": 0, "start": "2025-12-06 10:06:14.190469", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"5628f21fb189\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.17%\", \"created\": \"2025-12-06T07:59:29.688768Z\", \"daemon_id\": \"np0005548786\", 
\"daemon_name\": \"mon.np0005548786\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:06:12.220632Z daemon:mon.np0005548786 [INFO] \\\"Reconfigured mon.np0005548786 on host 'np0005548786.localdomain'\\\"\"], \"hostname\": \"np0005548786.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:05:53.631751Z\", \"memory_request\": 2147483648, \"memory_usage\": 136944025, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T07:59:29.550380Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"5628f21fb189\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.17%\", \"created\": \"2025-12-06T07:59:29.688768Z\", \"daemon_id\": \"np0005548786\", \"daemon_name\": \"mon.np0005548786\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:06:12.220632Z daemon:mon.np0005548786 [INFO] \\\"Reconfigured mon.np0005548786 on host 'np0005548786.localdomain'\\\"\"], \"hostname\": \"np0005548786.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:05:53.631751Z\", \"memory_request\": 2147483648, \"memory_usage\": 136944025, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T07:59:29.550380Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005548786", "--force"], "delta": "0:00:02.269664", "end": "2025-12-06 10:06:17.890167", "msg": "", "rc": 0, "start": "2025-12-06 10:06:15.620503", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005548786 from host 'np0005548786.localdomain'", "stdout_lines": ["Removed mon.np0005548786 from host 'np0005548786.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005548785.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548786.localdomain", "mon"], "delta": "0:00:00.696094", "end": "2025-12-06 10:06:19.305172", "item": ["np0005548786.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:06:18.609078", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005548786.localdomain", "stdout_lines": ["Removed label mon from host np0005548786.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548786.localdomain", "mgr"], "delta": "0:00:00.743585", "end": "2025-12-06 10:06:20.560396", "item": ["np0005548786.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2025-12-06 10:06:19.816811", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005548786.localdomain", "stdout_lines": ["Removed label mgr from host np0005548786.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548786.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548786.localdomain", "_admin"], "delta": "0:00:00.700506", "end": "2025-12-06 10:06:22.539794", "item": ["np0005548786.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:06:21.839288", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005548786.localdomain", "stdout_lines": ["Removed label _admin from host np0005548786.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:06:22.671326", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:06:32.683550", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005548786.localdomain"], "delta": "0:00:00.764271", "end": "2025-12-06 10:06:34.129607", "msg": "", "rc": 0, "start": "2025-12-06 10:06:33.365336", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005548786.localdomain'\ntype id \n-------------------- ---------------\ncrash np0005548786 ", "stdout_lines": 
["Scheduled to remove the following daemons from host 'np0005548786.localdomain'", "type id ", "-------------------- ---------------", "crash np0005548786 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005548786.localdomain", "-f", "json"], "delta": "0:00:00.730589", "end": "2025-12-06 10:06:35.546323", "msg": "", "rc": 0, "start": "2025-12-06 10:06:34.815734", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.104\", \"hostname\": \"np0005548786.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.104\", \"hostname\": \"np0005548786.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005548786.localdomain", "--force"], "delta": "0:00:00.797147", "end": "2025-12-06 10:06:36.925965", "msg": "", "rc": 0, "start": "2025-12-06 10:06:36.128818", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005548786.localdomain'", "stdout_lines": ["Removed host 'np0005548786.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005548785.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005548785.localdomain -> np0005548786.localdomain(192.168.122.104)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.469435.2025-12-06@10:06:37~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005548785.localdomain -> 
np0005548786.localdomain(192.168.122.104)] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.104/24", "dev", "vlan21"], "delta": "0:00:00.004673", "end": "2025-12-06 10:06:38.542211", "msg": "", "rc": 0, "start": "2025-12-06 10:06:38.537538", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005548785.localdomain -> np0005548789.localdomain(192.168.122.107)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.296949.2025-12-06@10:06:39~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005548785.localdomain -> np0005548789.localdomain(192.168.122.107)] => {"changed": true, "cmd": ["ip", "a", "add", "172.18.0.104/24", "dev", "vlan21"], "delta": "0:00:00.004764", "end": "2025-12-06 10:06:40.768148", "msg": "", "rc": 0, "start": "2025-12-06 10:06:40.763384", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005548785.localdomain -> np0005548789.localdomain(192.168.122.107)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.104"], "delta": "0:00:02.041531", "end": "2025-12-06 10:06:43.548494", "msg": "", "rc": 0, "start": "2025-12-06 10:06:41.506963", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.104 (172.18.0.104) 56(84) bytes of data.\n64 bytes from 172.18.0.104: icmp_seq=1 ttl=64 time=0.065 ms\n64 bytes from 172.18.0.104: icmp_seq=2 ttl=64 time=0.071 ms\n64 bytes from 172.18.0.104: icmp_seq=3 ttl=64 time=0.086 ms\n\n--- 172.18.0.104 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2036ms\nrtt min/avg/max/mdev = 0.065/0.074/0.086/0.008 ms", "stdout_lines": ["PING 172.18.0.104 (172.18.0.104) 56(84) bytes of data.", "64 bytes from 172.18.0.104: icmp_seq=1 ttl=64 time=0.065 ms", "64 bytes from 172.18.0.104: icmp_seq=2 ttl=64 time=0.071 ms", "64 bytes from 172.18.0.104: icmp_seq=3 ttl=64 time=0.086 ms", "", "--- 172.18.0.104 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2036ms", "rtt min/avg/max/mdev = 0.065/0.074/0.086/0.008 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] ******************************************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.708389", "end": "2025-12-06 10:06:44.941117", "rc": 0, "start": "2025-12-06 10:06:44.232728", "stderr": "", "stderr_lines": [], "stdout": "Scheduled 
mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548789", "-f", "json"], "delta": "0:00:00.661407", "end": "2025-12-06 10:06:46.281937", "msg": "", "rc": 0, "start": "2025-12-06 10:06:45.620530", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"8db79eb988f6\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.13%\", \"created\": \"2025-12-06T10:04:04.570095Z\", \"daemon_id\": \"np0005548789\", \"daemon_name\": \"mon.np0005548789\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:06:40.893799Z daemon:mon.np0005548789 [INFO] \\\"Reconfigured mon.np0005548789 on host 'np0005548789.localdomain'\\\"\"], \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:05:53.568961Z\", \"memory_request\": 2147483648, \"memory_usage\": 46053457, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:04.444117Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"8db79eb988f6\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.13%\", \"created\": \"2025-12-06T10:04:04.570095Z\", \"daemon_id\": \"np0005548789\", \"daemon_name\": \"mon.np0005548789\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:06:40.893799Z daemon:mon.np0005548789 [INFO] \\\"Reconfigured mon.np0005548789 on host 'np0005548789.localdomain'\\\"\"], \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:05:53.568961Z\", \"memory_request\": 2147483648, \"memory_usage\": 46053457, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:04.444117Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", 
"ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005548789", "--force"], "delta": "0:00:02.222054", "end": "2025-12-06 10:06:49.169921", "msg": "", "rc": 0, "start": "2025-12-06 10:06:46.947867", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005548789 from host 'np0005548789.localdomain'", "stdout_lines": ["Removed mon.np0005548789 from host 'np0005548789.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:06:49.324966", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:06:59.331805", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005548789.localdomain] *********** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on np0005548789.localdomain] *********** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005548789.localdomain:172.18.0.104"], "delta": "0:00:03.473757", "end": "2025-12-06 10:07:03.353136", "msg": "", "rc": 0, "start": "2025-12-06 10:06:59.879379", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005548789 on host 'np0005548789.localdomain'", "stdout_lines": ["Deployed mon.np0005548789 on host 'np0005548789.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:07:03.484899", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:07:13.495939", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.769794", "end": "2025-12-06 10:07:14.767887", "msg": "", "rc": 0, "start": "2025-12-06 10:07:13.998093", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by 
cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":52,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548787\",\"np0005548790\",\"np0005548788\"],\"quorum_age\":22,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":593285120,\"bytes_avail\":44478705664,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":78,\"modified\":\"2025-12-06T10:06:58.184349+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":52,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548787\",\"np0005548790\",\"np0005548788\"],\"quorum_age\":22,\"monmap\":{\"epoch\":11,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":89,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109573208,\"bytes_used\":593285120,\"bytes_avail\":44478705664,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":5,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":78,\"modified\":\"2025-12-06T10:06:58.184349+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:00.911395", "end": "2025-12-06 10:07:16.384760", "msg": "", "rc": 0, "start": "2025-12-06 10:07:15.473365", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005548788.localdomain'\nScheduled to reconfig osd.5 on host 'np0005548788.localdomain'\nScheduled to reconfig osd.1 on host 'np0005548789.localdomain'\nScheduled to reconfig osd.4 on host 'np0005548789.localdomain'\nScheduled to reconfig 
osd.0 on host 'np0005548790.localdomain'\nScheduled to reconfig osd.3 on host 'np0005548790.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005548788.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005548788.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005548789.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005548789.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005548790.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005548790.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005548785.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.800293", "end": "2025-12-06 10:07:18.000339", "msg": "", "rc": 0, "start": "2025-12-06 10:07:17.200046", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:07:18.126341", "stderr": "", "stdout": "Paused for 10.02 seconds", "stop": "2025-12-06 10:07:28.144288", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on 
np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j72361394830.488182 started=1 finished=0 ASYNC FAILED on np0005548785.localdomain: jid=j72361394830.488182 ok: [np0005548785.localdomain] => {"ansible_job_id": "j72361394830.488182", "changed": false, "child_pid": 488186, "failed_when_result": false, "finished": 1, "msg": "Timeout exceeded", "results_file": "/root/.ansible_async/j72361394830.488182", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Restart the active mgr] *********************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.843444", "end": "2025-12-06 10:08:06.568351", "msg": "", "rc": 0, "start": "2025-12-06 10:08:05.724907", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** ASYNC OK on np0005548785.localdomain: jid=j443348624584.490284 changed: [np0005548785.localdomain] => {"ansible_job_id": "j443348624584.490284", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.791473", "end": "2025-12-06 10:08:08.186107", "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j443348624584.490284", "start": "2025-12-06 10:08:07.394634", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", 
"registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.868380", "end": "2025-12-06 10:08:10.830191", "rc": 0, "start": "2025-12-06 10:08:09.961811", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548789", "-f", "json"], "delta": "0:00:00.683561", "end": "2025-12-06 10:08:12.218525", "msg": "", "rc": 0, "start": "2025-12-06 10:08:11.534964", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"fc31a9b04a3a\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.80%\", \"created\": \"2025-12-06T10:07:03.095554Z\", \"daemon_id\": \"np0005548789\", \"daemon_name\": \"mon.np0005548789\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:08:08.999755Z\", \"memory_request\": 2147483648, \"memory_usage\": 44952453, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:07:02.977946Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"fc31a9b04a3a\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"0.80%\", \"created\": \"2025-12-06T10:07:03.095554Z\", \"daemon_id\": \"np0005548789\", \"daemon_name\": \"mon.np0005548789\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548789.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:08:08.999755Z\", \"memory_request\": 2147483648, \"memory_usage\": 44952453, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:07:02.977946Z\", \"status\": 1, \"status_desc\": \"running\", 
\"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Migrate mon] ********************************************** ok: [np0005548785.localdomain] => { "msg": "Migrate mon: np0005548787.localdomain to node: np0005548790.localdomain" } TASK [ceph_migrate : MON - Get current mon IP address from node_map override] *** ok: [np0005548785.localdomain] => {"ansible_facts": {"mon_ipaddr": "172.18.0.105"}, "changed": false} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.790086", "end": "2025-12-06 10:08:13.945240", "msg": "", "rc": 0, "start": "2025-12-06 10:08:13.155154", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":56,\"quorum\":[0,1,2,3],\"quorum_names\":[\"np0005548787\",\"np0005548790\",\"np0005548788\",\"np0005548789\"],\"quorum_age\":25,\"monmap\":{\"epoch\":12,\"min_mon_release_name\":\"reef\",\"num_mons\":4},\"osdmap\":{\"epoch\":91,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109574332,\"bytes_used\":593444864,\"bytes_avail\":44478545920,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":78,\"modified\":\"2025-12-06T10:06:58.184349+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_OK\",\"checks\":{},\"mutes\":[]},\"election_epoch\":56,\"quorum\":[0,1,2,3],\"quorum_names\":[\"np0005548787\",\"np0005548790\",\"np0005548788\",\"np0005548789\"],\"quorum_age\":25,\"monmap\":{\"epoch\":12,\"min_mon_release_name\":\"reef\",\"num_mons\":4},\"osdmap\":{\"epoch\":91,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109574332,\"bytes_used\":593444864,\"bytes_avail\":44478545920,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":78,\"modified\":\"2025-12-06T10:06:58.184349+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Backup data for client purposes] ************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/backup.yaml for np0005548785.localdomain TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => {"changed": false, "examined": 2, "files": [{"atime": 1765015690.788136, "ctime": 1765015691.2101471, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1258329961, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765015690.984141, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 319, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765015692.1601722, "ctime": 1765015692.576183, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1258291544, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1765015692.4001784, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 2, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Backup ceph client data] ********************************** changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => (item={'path': '/etc/ceph/ceph.conf', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 319, 'inode': 1258329961, 
'dev': 64516, 'nlink': 1, 'atime': 1765015690.788136, 'mtime': 1765015690.984141, 'ctime': 1765015691.2101471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "0a25b4db2f33f6cae6351b919ee234a8727d7f28", "dest": "/home/tripleo-admin/ceph_client/ceph.conf", "gid": 0, "group": "root", "item": {"atime": 1765015690.788136, "ctime": 1765015691.2101471, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1258329961, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765015690.984141, "nlink": 1, "path": "/etc/ceph/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 319, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "798735c2b6c7b8266cd507d7f3338b3e", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 319, "src": "/etc/ceph/ceph.conf", "state": "file", "uid": 0} changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => (item={'path': '/etc/ceph/ceph.client.admin.keyring', 'mode': '0600', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 151, 'inode': 1258291544, 'dev': 64516, 'nlink': 1, 'atime': 1765015692.1601722, 'mtime': 1765015692.4001784, 'ctime': 1765015692.576183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': False, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_loop_var": "item", "changed": true, "checksum": "904318df38203f10f7fb10e4cc9586ba0770617d", "dest": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": {"atime": 1765015692.1601722, "ctime": 1765015692.576183, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 1258291544, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1765015692.4001784, "nlink": 1, "path": "/etc/ceph/ceph.client.admin.keyring", "pw_name": "root", "rgrp": false, "roth": false, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "md5sum": "7352b81dd3becac122410b306a619c64", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 151, "src": "/etc/ceph/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : MON - Get the current active mgr] ************************* changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "stat", "-f", "json"], "delta": "0:00:00.782230", "end": "2025-12-06 10:08:18.547204", "msg": "", "rc": 0, "start": "2025-12-06 10:08:17.764974", "stderr": "", "stderr_lines": [], "stdout": 
"\n{\"epoch\":28,\"available\":true,\"active_name\":\"np0005548790.kvkfyr\",\"num_standby\":4}", "stdout_lines": ["", "{\"epoch\":28,\"available\":true,\"active_name\":\"np0005548790.kvkfyr\",\"num_standby\":4}"]} TASK [ceph_migrate : MON - Load mgr data] ************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mgr": {"active_name": "np0005548790.kvkfyr", "available": true, "epoch": 28, "num_standby": 4}}, "changed": false} TASK [ceph_migrate : Print active mgr] ***************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Fail mgr if active in the current node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005548785.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.770477", "end": "2025-12-06 10:08:20.605939", "msg": "", "rc": 0, "start": "2025-12-06 10:08:19.835462", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:08:20.725969", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:08:30.738648", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC OK on np0005548785.localdomain: jid=j560919075371.491528 changed: [np0005548785.localdomain] => {"ansible_job_id": "j560919075371.491528", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.729038", "end": "2025-12-06 10:08:32.206236", "failed_when_result": false, "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j560919075371.491528", "start": "2025-12-06 10:08:31.477198", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": 
true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : Restart the active mgr] *********************************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "\"Timeout exceeded\" in ceph_orch_status.msg", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Drain and rm --force the cur_mon host] ************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/drain.yaml for np0005548785.localdomain TASK [ceph_migrate : Get ceph_cli] ********************************************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : MON - wait daemons] *************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548787", "-f", "json"], "delta": "0:00:00.820644", "end": "2025-12-06 10:08:34.840288", "msg": "", "rc": 0, "start": "2025-12-06 10:08:34.019644", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"92a09430a852\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.27%\", \"created\": \"2025-12-06T07:59:27.172547Z\", \"daemon_id\": \"np0005548787\", \"daemon_name\": \"mon.np0005548787\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548787.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:08:22.451975Z\", \"memory_request\": 2147483648, \"memory_usage\": 169659596, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T07:59:27.045718Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"92a09430a852\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": 
\"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"1.27%\", \"created\": \"2025-12-06T07:59:27.172547Z\", \"daemon_id\": \"np0005548787\", \"daemon_name\": \"mon.np0005548787\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548787.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:08:22.451975Z\", \"memory_request\": 2147483648, \"memory_usage\": 169659596, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T07:59:27.045718Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : DRAIN - Delete the mon running on the current controller node] *** changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005548787", "--force"], "delta": "0:00:05.866667", "end": "2025-12-06 10:08:41.570384", "msg": "", "rc": 0, "start": "2025-12-06 10:08:35.703717", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005548787 from host 'np0005548787.localdomain'", "stdout_lines": ["Removed mon.np0005548787 from host 'np0005548787.localdomain'"]} TASK [ceph_migrate : DRAIN - remove label from the src node] ******************* included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/labels.yaml for np0005548785.localdomain TASK [ceph_migrate : Set/Unset labels - rm] ************************************ skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Print nodes] ********************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Set/Unset labels - rm] ************************************ changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', 'mon']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548787.localdomain", "mon"], "delta": "0:00:00.703362", "end": "2025-12-06 10:08:43.075174", "item": ["np0005548787.localdomain", "mon"], "msg": "", "rc": 0, "start": "2025-12-06 10:08:42.371812", "stderr": "", "stderr_lines": [], "stdout": "Removed label mon from host np0005548787.localdomain", "stdout_lines": ["Removed label mon from host np0005548787.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', 'mgr']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", 
"np0005548787.localdomain", "mgr"], "delta": "0:00:00.682706", "end": "2025-12-06 10:08:44.268554", "item": ["np0005548787.localdomain", "mgr"], "msg": "", "rc": 0, "start": "2025-12-06 10:08:43.585848", "stderr": "", "stderr_lines": [], "stdout": "Removed label mgr from host np0005548787.localdomain", "stdout_lines": ["Removed label mgr from host np0005548787.localdomain"]} changed: [np0005548785.localdomain] => (item=['np0005548787.localdomain', '_admin']) => {"ansible_loop_var": "item", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "label", "rm", "np0005548787.localdomain", "_admin"], "delta": "0:00:00.614762", "end": "2025-12-06 10:08:45.444251", "item": ["np0005548787.localdomain", "_admin"], "msg": "", "rc": 0, "start": "2025-12-06 10:08:44.829489", "stderr": "", "stderr_lines": [], "stdout": "Removed label _admin from host np0005548787.localdomain", "stdout_lines": ["Removed label _admin from host np0005548787.localdomain"]} TASK [ceph_migrate : Wait for the orchestrator to remove labels] *************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:08:45.591025", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:08:55.602939", "user_input": ""} TASK [ceph_migrate : DRAIN - Drain the host] *********************************** changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "drain", "np0005548787.localdomain"], "delta": "0:00:00.718951", "end": "2025-12-06 10:08:56.984937", "msg": "", "rc": 0, "start": "2025-12-06 10:08:56.265986", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to remove the following daemons from host 'np0005548787.localdomain'\ntype id \n-------------------- ---------------\ncrash np0005548787 ", "stdout_lines": ["Scheduled to remove the following daemons from host 'np0005548787.localdomain'", "type id ", "-------------------- ---------------", "crash np0005548787 "]} TASK [ceph_migrate : DRAIN - cleanup the host] ********************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "force_clean | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - check host in hostmap] ****************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "ls", "--host_pattern", "np0005548787.localdomain", "-f", "json"], "delta": "0:00:00.760720", "end": "2025-12-06 10:08:58.383418", "msg": "", "rc": 0, "start": "2025-12-06 10:08:57.622698", "stderr": 
"", "stderr_lines": [], "stdout": "\n[{\"addr\": \"192.168.122.105\", \"hostname\": \"np0005548787.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.105\", \"hostname\": \"np0005548787.localdomain\", \"labels\": [\"_no_schedule\", \"_no_conf_keyring\"], \"status\": \"\"}]"]} TASK [ceph_migrate : MON - rm the cur_mon host from the Ceph cluster] ********** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "host", "rm", "np0005548787.localdomain", "--force"], "delta": "0:00:00.716015", "end": "2025-12-06 10:08:59.711078", "msg": "", "rc": 0, "start": "2025-12-06 10:08:58.995063", "stderr": "", "stderr_lines": [], "stdout": "Removed host 'np0005548787.localdomain'", "stdout_lines": ["Removed host 'np0005548787.localdomain'"]} TASK [ceph_migrate : MON - Get current mon IP address] ************************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "mon_ipaddr is not defined", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - Migrate the mon ip address from src to target node] *** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/network.yaml for np0005548785.localdomain TASK [ceph_migrate : MON - Print current mon IP address] *********************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Patch net-config config and remove the IP address (src node)] *** changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.476357.2025-12-06@10:09:00~", "changed": true, "found": 1, "msg": "1 line(s) removed"} TASK [ceph_migrate : MON - Refresh os-net-config (src node)] ******************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not manual_migration | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - manually rm the ip address (src node)] ************** changed: [np0005548785.localdomain -> np0005548787.localdomain(192.168.122.105)] => {"changed": true, "cmd": ["ip", "a", "del", "172.18.0.105/24", "dev", "vlan21"], "delta": "0:00:00.005184", "end": "2025-12-06 10:09:01.383825", "msg": "", "rc": 0, "start": "2025-12-06 10:09:01.378641", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - Patch net-config config and add the IP address (target node)] *** changed: [np0005548785.localdomain -> np0005548790.localdomain(192.168.122.108)] => {"backup": "/etc/os-net-config/tripleo_config.yaml.300108.2025-12-06@10:09:02~", "changed": true, "msg": "line added"} TASK [ceph_migrate : MON - Refresh os-net-config (target_node)] **************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "not manual_migration | bool | default(false)", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : MON - statically assign the ip address to the target node] *** changed: [np0005548785.localdomain -> np0005548790.localdomain(192.168.122.108)] => {"changed": true, "cmd": ["ip", "a", "add", 
"172.18.0.105/24", "dev", "vlan21"], "delta": "0:00:00.005021", "end": "2025-12-06 10:09:03.465416", "msg": "", "rc": 0, "start": "2025-12-06 10:09:03.460395", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : MON - ping ip address to see if is reachable on the target node] *** changed: [np0005548785.localdomain -> np0005548790.localdomain(192.168.122.108)] => {"changed": true, "cmd": ["ping", "-W1", "-c", "3", "172.18.0.105"], "delta": "0:00:02.076489", "end": "2025-12-06 10:09:06.367389", "msg": "", "rc": 0, "start": "2025-12-06 10:09:04.290900", "stderr": "", "stderr_lines": [], "stdout": "PING 172.18.0.105 (172.18.0.105) 56(84) bytes of data.\n64 bytes from 172.18.0.105: icmp_seq=1 ttl=64 time=0.057 ms\n64 bytes from 172.18.0.105: icmp_seq=2 ttl=64 time=0.053 ms\n64 bytes from 172.18.0.105: icmp_seq=3 ttl=64 time=0.046 ms\n\n--- 172.18.0.105 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2070ms\nrtt min/avg/max/mdev = 0.046/0.052/0.057/0.004 ms", "stdout_lines": ["PING 172.18.0.105 (172.18.0.105) 56(84) bytes of data.", "64 bytes from 172.18.0.105: icmp_seq=1 ttl=64 time=0.057 ms", "64 bytes from 172.18.0.105: icmp_seq=2 ttl=64 time=0.053 ms", "64 bytes from 172.18.0.105: icmp_seq=3 ttl=64 time=0.046 ms", "", "--- 172.18.0.105 ping statistics ---", "3 packets transmitted, 3 received, 0% packet loss, time 2070ms", "rtt min/avg/max/mdev = 0.046/0.052/0.057/0.004 ms"]} TASK [ceph_migrate : MON - Fail if the IP address is not active in the target node] *** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ping_target_ip.rc != 0", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Unmanage mons] ******************************************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.652485", "end": "2025-12-06 10:09:07.695997", "rc": 0, "start": "2025-12-06 10:09:07.043512", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : Print the resulting spec] ********************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Get tmp mon] **************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548790", "-f", "json"], "delta": "0:00:03.745654", "end": "2025-12-06 10:09:12.082703", "msg": "", "rc": 0, "start": "2025-12-06 10:09:08.337049", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"c2f79d602097\", \"container_image_digests\": 
[\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.04%\", \"created\": \"2025-12-06T10:04:01.810794Z\", \"daemon_id\": \"np0005548790\", \"daemon_name\": \"mon.np0005548790\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:08:45.389266Z daemon:mon.np0005548790 [INFO] \\\"Reconfigured mon.np0005548790 on host 'np0005548790.localdomain'\\\"\"], \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:08:22.897444Z\", \"memory_request\": 2147483648, \"memory_usage\": 68692213, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:01.714811Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"c2f79d602097\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"3.04%\", \"created\": \"2025-12-06T10:04:01.810794Z\", \"daemon_id\": \"np0005548790\", \"daemon_name\": \"mon.np0005548790\", \"daemon_type\": \"mon\", \"events\": [\"2025-12-06T10:08:45.389266Z daemon:mon.np0005548790 [INFO] \\\"Reconfigured mon.np0005548790 on host 'np0005548790.localdomain'\\\"\"], \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:08:22.897444Z\", \"memory_request\": 2147483648, \"memory_usage\": 68692213, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:04:01.714811Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : MON - Delete the running mon] ***************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "rm", "mon.np0005548790", "--force"], "delta": "0:00:02.318583", "end": "2025-12-06 10:09:15.022051", "msg": "", "rc": 0, "start": "2025-12-06 10:09:12.703468", "stderr": "", "stderr_lines": [], "stdout": "Removed mon.np0005548790 from host 'np0005548790.localdomain'", "stdout_lines": ["Removed mon.np0005548790 from host 'np0005548790.localdomain'"]} TASK [ceph_migrate : MON - Wait for the current mon to be deleted] ************* Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:09:15.170819", "stderr": "", "stdout": "Paused for 10.02 seconds", "stop": "2025-12-06 10:09:25.186587", "user_input": ""} TASK [ceph_migrate : MON - Redeploy mon on np0005548790.localdomain] *********** skipping: 
[np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : MON - Redeploy mon on np0005548790.localdomain] *********** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "daemon", "add", "mon", "np0005548790.localdomain:172.18.0.105"], "delta": "0:00:03.380488", "end": "2025-12-06 10:09:29.167531", "msg": "", "rc": 0, "start": "2025-12-06 10:09:25.787043", "stderr": "", "stderr_lines": [], "stdout": "Deployed mon.np0005548790 on host 'np0005548790.localdomain'", "stdout_lines": ["Deployed mon.np0005548790 on host 'np0005548790.localdomain'"]} TASK [ceph_migrate : MON - Wait for the spec to be updated] ******************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:09:29.303229", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2025-12-06 10:09:39.306091", "user_input": ""} TASK [ceph_migrate : MON - Check mons quorum] ********************************** changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "-s", "-f", "json"], "delta": "0:00:00.748932", "end": "2025-12-06 10:09:40.485558", "msg": "", "rc": 0, "start": "2025-12-06 10:09:39.736626", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548788\",\"np0005548789\",\"np0005548790\"],\"quorum_age\":3,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":92,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":597700608,\"bytes_avail\":44474290176,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2025-12-06T10:09:27.302044+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", 
"{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548788\",\"np0005548789\",\"np0005548790\"],\"quorum_age\":3,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":92,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109575456,\"bytes_used\":597700608,\"bytes_avail\":44474290176,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":4,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2025-12-06T10:09:27.302044+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : reconfig osds] ******************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "reconfig", "osd.default_drive_group"], "delta": "0:00:03.904712", "end": "2025-12-06 10:09:45.009638", "msg": "", "rc": 0, "start": "2025-12-06 10:09:41.104926", "stderr": "", "stderr_lines": [], "stdout": "Scheduled to reconfig osd.2 on host 'np0005548788.localdomain'\nScheduled to reconfig osd.5 on host 'np0005548788.localdomain'\nScheduled to reconfig osd.1 on host 'np0005548789.localdomain'\nScheduled to reconfig osd.4 on host 'np0005548789.localdomain'\nScheduled to reconfig osd.0 on host 'np0005548790.localdomain'\nScheduled to reconfig osd.3 on host 'np0005548790.localdomain'", "stdout_lines": ["Scheduled to reconfig osd.2 on host 'np0005548788.localdomain'", "Scheduled to reconfig osd.5 on host 'np0005548788.localdomain'", "Scheduled to reconfig osd.1 on host 'np0005548789.localdomain'", "Scheduled to reconfig osd.4 on host 'np0005548789.localdomain'", "Scheduled to reconfig osd.0 on host 'np0005548790.localdomain'", "Scheduled to reconfig osd.3 on host 'np0005548790.localdomain'"]} TASK [ceph_migrate : force-fail ceph mgr] ************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/fail_mgr.yaml for np0005548785.localdomain TASK [ceph_migrate : Refresh ceph_cli] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_cli.yaml for np0005548785.localdomain TASK [ceph_migrate : Set ceph CLI] ********************************************* ok: [np0005548785.localdomain] => 
{"ansible_facts": {"ceph_cli": "podman run --rm --net=host --ipc=host --volume /home/tripleo-admin/ceph_client:/etc/ceph:z --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "}, "changed": false} TASK [ceph_migrate : Force fail ceph mgr] ************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:03.799026", "end": "2025-12-06 10:09:49.591940", "msg": "", "rc": 0, "start": "2025-12-06 10:09:45.792914", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Wait for cephadm to reconcile] **************************** Pausing for 10 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2025-12-06 10:09:49.714329", "stderr": "", "stdout": "Paused for 10.01 seconds", "stop": "2025-12-06 10:09:59.727829", "user_input": ""} TASK [ceph_migrate : Get the ceph orchestrator status with] ******************** ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC POLL on np0005548785.localdomain: jid=j238407579275.494621 started=1 finished=0 ASYNC FAILED on np0005548785.localdomain: jid=j238407579275.494621 ok: 
[np0005548785.localdomain] => {"ansible_job_id": "j238407579275.494621", "changed": false, "child_pid": 494625, "failed_when_result": false, "finished": 1, "msg": "Timeout exceeded", "results_file": "/root/.ansible_async/j238407579275.494621", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Restart the active mgr] *********************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.929876", "end": "2025-12-06 10:10:38.111520", "msg": "", "rc": 0, "start": "2025-12-06 10:10:37.181644", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Fail if ceph orchestrator is still not responding] ******** ASYNC OK on np0005548785.localdomain: jid=j674688952853.496733 changed: [np0005548785.localdomain] => {"ansible_job_id": "j674688952853.496733", "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "status", "--format", "json"], "delta": "0:00:00.794497", "end": "2025-12-06 10:10:39.728042", "finished": 1, "msg": "", "rc": 0, "results_file": "/root/.ansible_async/j674688952853.496733", "start": "2025-12-06 10:10:38.933545", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "\n{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}", "stdout_lines": ["", "{\"available\": true, \"backend\": \"cephadm\", \"paused\": false, \"workers\": 10}"]} TASK [ceph_migrate : MON - Manage mons] **************************************** ok: [np0005548785.localdomain] => {"changed": false, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "-v", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "-v", "/var/lib/ceph/:/var/lib/ceph/:z", "-v", "/var/log/ceph/:/var/log/ceph/:z", "-v", "/home/tripleo-admin/mon:/home/tripleo-admin/mon:z", "--entrypoint=ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "-n", "client.admin", "-k", "/etc/ceph/ceph.client.admin.keyring", "--cluster", "ceph", "orch", "apply", "--in-file", "/home/tripleo-admin/mon"], "delta": "0:00:00.833473", "end": "2025-12-06 10:10:42.199916", "rc": 0, "start": "2025-12-06 10:10:41.366443", "stderr": "", "stderr_lines": [], "stdout": "Scheduled mon update...", "stdout_lines": ["Scheduled mon update..."]} TASK [ceph_migrate : MON - wait daemons] *************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/wait_daemons.yaml for np0005548785.localdomain TASK [ceph_migrate : print daemon id option] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : wait for mon] ********************************************* changed: [np0005548785.localdomain] => {"attempts": 1, "changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", 
"/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "orch", "ps", "--daemon_type", "mon", "--daemon_id", "np0005548790", "-f", "json"], "delta": "0:00:00.769595", "end": "2025-12-06 10:10:43.650744", "msg": "", "rc": 0, "start": "2025-12-06 10:10:42.881149", "stderr": "", "stderr_lines": [], "stdout": "\n[{\"container_id\": \"d41db198b5de\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.26%\", \"created\": \"2025-12-06T10:09:28.898475Z\", \"daemon_id\": \"np0005548790\", \"daemon_name\": \"mon.np0005548790\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:10:40.390933Z\", \"memory_request\": 2147483648, \"memory_usage\": 40705720, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:09:28.764279Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]", "stdout_lines": ["", "[{\"container_id\": \"d41db198b5de\", \"container_image_digests\": [\"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:4d2f9dc5b2b33ee1c77bbfabcbbb9f4d94d343b04c4de2e4f8b3b81a1f0fd2fe\", \"registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\"], \"container_image_id\": \"273c05181f81a0e9697960e2eb83e7f1eabed71b98a5f20975a520dc0ec39e12\", \"container_image_name\": \"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\", \"cpu_percentage\": \"2.26%\", \"created\": \"2025-12-06T10:09:28.898475Z\", \"daemon_id\": \"np0005548790\", \"daemon_name\": \"mon.np0005548790\", \"daemon_type\": \"mon\", \"hostname\": \"np0005548790.localdomain\", \"is_active\": false, \"last_refresh\": \"2025-12-06T10:10:40.390933Z\", \"memory_request\": 2147483648, \"memory_usage\": 40705720, \"ports\": [], \"service_name\": \"mon\", \"started\": \"2025-12-06T10:09:28.764279Z\", \"status\": 1, \"status_desc\": \"running\", \"version\": \"18.2.1-361.el9cp\"}]"]} TASK [ceph_migrate : Sleep before moving to the next mon] ********************** Pausing for 30 seconds ok: [np0005548785.localdomain] => {"changed": false, "delta": 30, "echo": true, "rc": 0, "start": "2025-12-06 10:10:43.846454", "stderr": "", "stdout": "Paused for 30.02 seconds", "stop": "2025-12-06 10:11:13.870451", "user_input": ""} TASK [ceph_migrate : POST - Dump logs] ***************************************** included: /home/zuul/src/github.com/openstack-k8s-operators/data-plane-adoption/tests/roles/ceph_migrate/tasks/ceph_load.yaml for np0005548785.localdomain TASK [ceph_migrate : Check file in the src directory] ************************** ok: [np0005548785.localdomain] => {"changed": false, "examined": 4, "files": [{"atime": 1765015464.9271553, "ctime": 1765015464.7251492, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 394325652, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 
1765015411.1415088, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.conf", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 142, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765015464.9331555, "ctime": 1765015464.7251492, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 285301600, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765007879.1861506, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 151, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765009135.615282, "ctime": 1765015464.7251492, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 285301601, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765009133.328211, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.openstack.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 231, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, {"atime": 1765009136.7633176, "ctime": 1765015464.7251492, "dev": 64516, "gid": 0, "gr_name": "root", "inode": 285301602, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1765009134.2082384, "nlink": 1, "path": "/home/tripleo-admin/ceph_client/ceph.client.manila.keyring", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 153, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 4, "msg": "All paths examined", "skipped_paths": {}} TASK [ceph_migrate : Restore files] ******************************************** changed: [np0005548785.localdomain] => (item=ceph.conf) => {"ansible_loop_var": "item", "changed": true, "checksum": "268185d9e6258eec13ecf92b158b41045a26873f", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "item": "ceph.conf", "md5sum": "a8842acf8fb69e0f3f6d14c97d137d87", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 142, "src": "/home/tripleo-admin/ceph_client/ceph.conf", "state": "file", "uid": 0} changed: [np0005548785.localdomain] => (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": true, "checksum": "904318df38203f10f7fb10e4cc9586ba0770617d", "dest": "/etc/ceph/ceph.client.admin.keyring", "gid": 0, "group": "root", "item": "ceph.client.admin.keyring", "md5sum": "7352b81dd3becac122410b306a619c64", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 151, "src": "/home/tripleo-admin/ceph_client/ceph.client.admin.keyring", "state": "file", "uid": 0} TASK [ceph_migrate : Ensure backup directory exists] *************************** changed: [np0005548785.localdomain] => {"changed": true, "gid": 1003, "group": "tripleo-admin", "mode": "0755", "owner": "tripleo-admin", "path": "/home/tripleo-admin/ceph_client/logs", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 1003} TASK [ceph_migrate : Get Ceph Health] ****************************************** changed: 
[np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "-s", "-f", "json"], "delta": "0:00:03.075043", "end": "2025-12-06 10:11:19.843409", "msg": "", "rc": 0, "start": "2025-12-06 10:11:16.768366", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": "\n{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548788\",\"np0005548789\",\"np0005548790\"],\"quorum_age\":103,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":94,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109577704,\"bytes_used\":616390656,\"bytes_avail\":44455600128,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2025-12-06T10:09:27.302044+0000\",\"services\":{}},\"progress_events\":{}}", "stdout_lines": ["", "{\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"health\":{\"status\":\"HEALTH_WARN\",\"checks\":{\"CEPHADM_STRAY_DAEMON\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray daemon(s) not managed by cephadm\",\"count\":1},\"muted\":false},\"CEPHADM_STRAY_HOST\":{\"severity\":\"HEALTH_WARN\",\"summary\":{\"message\":\"1 stray host(s) with 1 daemon(s) not managed by 
cephadm\",\"count\":1},\"muted\":false}},\"mutes\":[]},\"election_epoch\":68,\"quorum\":[0,1,2],\"quorum_names\":[\"np0005548788\",\"np0005548789\",\"np0005548790\"],\"quorum_age\":103,\"monmap\":{\"epoch\":15,\"min_mon_release_name\":\"reef\",\"num_mons\":3},\"osdmap\":{\"epoch\":94,\"num_osds\":6,\"num_up_osds\":6,\"osd_up_since\":1765008020,\"num_in_osds\":6,\"osd_in_since\":1765007998,\"num_remapped_pgs\":0},\"pgmap\":{\"pgs_by_state\":[{\"state_name\":\"active+clean\",\"count\":177}],\"num_pgs\":177,\"num_pools\":7,\"num_objects\":69,\"data_bytes\":109577704,\"bytes_used\":616390656,\"bytes_avail\":44455600128,\"bytes_total\":45071990784},\"fsmap\":{\"epoch\":16,\"id\":1,\"up\":1,\"in\":1,\"max\":1,\"by_rank\":[{\"filesystem_id\":1,\"rank\":0,\"name\":\"mds.np0005548790.vhcezv\",\"status\":\"up:active\",\"gid\":26356}],\"up:standby\":2},\"mgrmap\":{\"available\":true,\"num_standbys\":3,\"modules\":[\"cephadm\",\"iostat\",\"nfs\",\"restful\"],\"services\":{}},\"servicemap\":{\"epoch\":82,\"modified\":\"2025-12-06T10:09:27.302044+0000\",\"services\":{}},\"progress_events\":{}}"]} TASK [ceph_migrate : Load ceph data] ******************************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"ceph": {"election_epoch": 68, "fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "fsmap": {"by_rank": [{"filesystem_id": 1, "gid": 26356, "name": "mds.np0005548790.vhcezv", "rank": 0, "status": "up:active"}], "epoch": 16, "id": 1, "in": 1, "max": 1, "up": 1, "up:standby": 2}, "health": {"checks": {"CEPHADM_STRAY_DAEMON": {"muted": false, "severity": "HEALTH_WARN", "summary": {"count": 1, "message": "1 stray daemon(s) not managed by cephadm"}}, "CEPHADM_STRAY_HOST": {"muted": false, "severity": "HEALTH_WARN", "summary": {"count": 1, "message": "1 stray host(s) with 1 daemon(s) not managed by cephadm"}}}, "mutes": [], "status": "HEALTH_WARN"}, "mgrmap": {"available": true, "modules": ["cephadm", "iostat", "nfs", "restful"], "num_standbys": 3, "services": {}}, "monmap": {"epoch": 15, "min_mon_release_name": "reef", "num_mons": 3}, "osdmap": {"epoch": 94, "num_in_osds": 6, "num_osds": 6, "num_remapped_pgs": 0, "num_up_osds": 6, "osd_in_since": 1765007998, "osd_up_since": 1765008020}, "pgmap": {"bytes_avail": 44455600128, "bytes_total": 45071990784, "bytes_used": 616390656, "data_bytes": 109577704, "num_objects": 69, "num_pgs": 177, "num_pools": 7, "pgs_by_state": [{"count": 177, "state_name": "active+clean"}]}, "progress_events": {}, "quorum": [0, 1, 2], "quorum_age": 103, "quorum_names": ["np0005548788", "np0005548789", "np0005548790"], "servicemap": {"epoch": 82, "modified": "2025-12-06T10:09:27.302044+0000", "services": {}}}}, "changed": false} TASK [ceph_migrate : Dump ceph -s output to log file] ************************** changed: [np0005548785.localdomain] => {"changed": true, "checksum": "c41de4b34b0e5c1622eb496ee01a72d5ec0982bb", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_health.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "bb8f97fd843ceab5bd2b2587c4398496", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1437, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015880.0507374-62147-22933028795853/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph Orch ServiceMap] ********************************* changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "ls", "-f", "json"], "delta": "0:00:03.124894", "end": "2025-12-06 
10:11:24.618793", "msg": "", "rc": 0, "start": "2025-12-06 10:11:21.493899", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": "\n[{\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2025-12-06T07:57:51.164915Z\", \"last_refresh\": \"2025-12-06T10:10:40.039426Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mds\"}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2025-12-06T10:02:08.109056Z\", \"last_refresh\": \"2025-12-06T10:10:40.039854Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mgr\"}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2025-12-06T10:03:37.241143Z\", \"last_refresh\": \"2025-12-06T10:10:40.039976Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T10:10:44.831786Z service:mon [INFO] \\\"service was created\\\"\"], \"placement\": {\"label\": \"mon\"}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2025-12-06T10:10:41.972329Z\", \"last_refresh\": \"2025-12-06T10:10:40.040097Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2025-12-06T07:58:05.620181Z\", \"running\": 0, \"size\": 0}}, {\"placement\": {\"hosts\": [\"np0005548788.localdomain\", \"np0005548789.localdomain\", \"np0005548790.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2025-12-06T07:58:35.648091Z\", \"last_refresh\": \"2025-12-06T10:10:40.039602Z\", \"running\": 6, \"size\": 6}}]", "stdout_lines": ["", "[{\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"crash\", \"service_type\": \"crash\", \"status\": {\"created\": \"2025-12-06T07:57:51.164915Z\", \"last_refresh\": \"2025-12-06T10:10:40.039426Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mds\"}, \"service_id\": \"mds\", \"service_name\": \"mds.mds\", \"service_type\": \"mds\", \"status\": {\"created\": \"2025-12-06T10:02:08.109056Z\", \"last_refresh\": \"2025-12-06T10:10:40.039854Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"label\": \"mgr\"}, \"service_name\": \"mgr\", \"service_type\": \"mgr\", \"status\": {\"created\": \"2025-12-06T10:03:37.241143Z\", \"last_refresh\": \"2025-12-06T10:10:40.039976Z\", \"running\": 3, \"size\": 3}}, {\"events\": [\"2025-12-06T10:10:44.831786Z service:mon [INFO] \\\"service was created\\\"\"], \"placement\": {\"label\": \"mon\"}, \"service_name\": \"mon\", \"service_type\": \"mon\", \"status\": {\"created\": \"2025-12-06T10:10:41.972329Z\", 
\"last_refresh\": \"2025-12-06T10:10:40.040097Z\", \"running\": 3, \"size\": 3}}, {\"placement\": {\"host_pattern\": \"*\"}, \"service_name\": \"node-proxy\", \"service_type\": \"node-proxy\", \"status\": {\"created\": \"2025-12-06T07:58:05.620181Z\", \"running\": 0, \"size\": 0}}, {\"placement\": {\"hosts\": [\"np0005548788.localdomain\", \"np0005548789.localdomain\", \"np0005548790.localdomain\"]}, \"service_id\": \"default_drive_group\", \"service_name\": \"osd.default_drive_group\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"paths\": [\"/dev/ceph_vg0/ceph_lv0\", \"/dev/ceph_vg1/ceph_lv1\"]}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}, \"status\": {\"created\": \"2025-12-06T07:58:35.648091Z\", \"last_refresh\": \"2025-12-06T10:10:40.039602Z\", \"running\": 6, \"size\": 6}}]"]} TASK [ceph_migrate : Load Service Map] ***************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"servicemap": [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2025-12-06T07:57:51.164915Z", "last_refresh": "2025-12-06T10:10:40.039426Z", "running": 3, "size": 3}}, {"placement": {"label": "mds"}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2025-12-06T10:02:08.109056Z", "last_refresh": "2025-12-06T10:10:40.039854Z", "running": 3, "size": 3}}, {"placement": {"label": "mgr"}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2025-12-06T10:03:37.241143Z", "last_refresh": "2025-12-06T10:10:40.039976Z", "running": 3, "size": 3}}, {"events": ["2025-12-06T10:10:44.831786Z service:mon [INFO] \"service was created\""], "placement": {"label": "mon"}, "service_name": "mon", "service_type": "mon", "status": {"created": "2025-12-06T10:10:41.972329Z", "last_refresh": "2025-12-06T10:10:40.040097Z", "running": 3, "size": 3}}, {"placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2025-12-06T07:58:05.620181Z", "running": 0, "size": 0}}, {"placement": {"hosts": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2025-12-06T07:58:35.648091Z", "last_refresh": "2025-12-06T10:10:40.039602Z", "running": 6, "size": 6}}]}, "changed": false} TASK [ceph_migrate : Print Service Map] **************************************** skipping: [np0005548785.localdomain] => (item={'placement': {'host_pattern': '*'}, 'service_name': 'crash', 'service_type': 'crash', 'status': {'created': '2025-12-06T07:57:51.164915Z', 'last_refresh': '2025-12-06T10:10:40.039426Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2025-12-06T07:57:51.164915Z", "last_refresh": "2025-12-06T10:10:40.039426Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'placement': {'label': 'mds'}, 'service_id': 'mds', 'service_name': 'mds.mds', 'service_type': 'mds', 'status': {'created': '2025-12-06T10:02:08.109056Z', 'last_refresh': '2025-12-06T10:10:40.039854Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", 
"false_condition": "debug | default(false)", "item": {"placement": {"label": "mds"}, "service_id": "mds", "service_name": "mds.mds", "service_type": "mds", "status": {"created": "2025-12-06T10:02:08.109056Z", "last_refresh": "2025-12-06T10:10:40.039854Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'placement': {'label': 'mgr'}, 'service_name': 'mgr', 'service_type': 'mgr', 'status': {'created': '2025-12-06T10:03:37.241143Z', 'last_refresh': '2025-12-06T10:10:40.039976Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"label": "mgr"}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2025-12-06T10:03:37.241143Z", "last_refresh": "2025-12-06T10:10:40.039976Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'events': ['2025-12-06T10:10:44.831786Z service:mon [INFO] "service was created"'], 'placement': {'label': 'mon'}, 'service_name': 'mon', 'service_type': 'mon', 'status': {'created': '2025-12-06T10:10:41.972329Z', 'last_refresh': '2025-12-06T10:10:40.040097Z', 'running': 3, 'size': 3}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"events": ["2025-12-06T10:10:44.831786Z service:mon [INFO] \"service was created\""], "placement": {"label": "mon"}, "service_name": "mon", "service_type": "mon", "status": {"created": "2025-12-06T10:10:41.972329Z", "last_refresh": "2025-12-06T10:10:40.040097Z", "running": 3, "size": 3}}} skipping: [np0005548785.localdomain] => (item={'placement': {'host_pattern': '*'}, 'service_name': 'node-proxy', 'service_type': 'node-proxy', 'status': {'created': '2025-12-06T07:58:05.620181Z', 'running': 0, 'size': 0}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"host_pattern": "*"}, "service_name": "node-proxy", "service_type": "node-proxy", "status": {"created": "2025-12-06T07:58:05.620181Z", "running": 0, "size": 0}}} skipping: [np0005548785.localdomain] => (item={'placement': {'hosts': ['np0005548788.localdomain', 'np0005548789.localdomain', 'np0005548790.localdomain']}, 'service_id': 'default_drive_group', 'service_name': 'osd.default_drive_group', 'service_type': 'osd', 'spec': {'data_devices': {'paths': ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1']}, 'filter_logic': 'AND', 'objectstore': 'bluestore'}, 'status': {'created': '2025-12-06T07:58:35.648091Z', 'last_refresh': '2025-12-06T10:10:40.039602Z', 'running': 6, 'size': 6}}) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": {"placement": {"hosts": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"]}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2025-12-06T07:58:35.648091Z", "last_refresh": "2025-12-06T10:10:40.039602Z", "running": 6, "size": 6}}} skipping: [np0005548785.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch ls output to log file] ********************* changed: [np0005548785.localdomain] => {"changed": true, "checksum": "3e46475d379e94387c6612d87709da0db6a9fc56", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_orch_ls.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "5589d533672f17859b8c01c19ebb88b7", "mode": "0644", "owner": 
"tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1600, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015884.89336-62176-15534371762009/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph config] ****************************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "config", "dump", "-f", "json"], "delta": "0:00:03.202210", "end": "2025-12-06 10:11:29.534837", "msg": "", "rc": 0, "start": "2025-12-06 10:11:26.332627", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": "\n[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_haproxy\",\"value\":\"registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_keepalived\",\"value\":\"registry.redhat.io/rhceph/keepalived-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709082009\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548788\",\"location_type\":\"host\",\"location_value\"
:\"np0005548788\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548789\",\"location_type\":\"host\",\"location_value\":\"np0005548789\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548790\",\"location_type\":\"host\",\"location_value\":\"np0005548790\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.np0005548790.vhcezv\",\"name\":\"mds_join_fs\",\"value\":\"cephfs\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]", "stdout_lines": ["", "[{\"section\":\"global\",\"name\":\"cluster_network\",\"value\":\"172.20.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"container_image\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"basic\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv4\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"ms_bind_ipv6\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"global\",\"name\":\"public_network\",\"value\":\"172.18.0.0/24\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mon\",\"name\":\"auth_allow_insecure_global_id_reclaim\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_base\",\"value\":\"registry.redhat.io/rhceph/rhceph-7-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_haproxy\",\"value\":\"registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_image_keepalived\",\"value\":\"registry.redhat.io/rhceph/keepalived-rhel9:latest\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/container_init\",\"value\":\"True\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/migration_current\",\"value\":\"7\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/cephadm/use_repo_digest\",\"value\":\"false\",\"level\":\"advanced\",\"can_update_at_runtime\":false,\"mask\":\"\"},{\"section\":\"mgr\",\"name\":\"mgr/orchestrator/orchestrator\",\"value\":\"cephadm\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709082009\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548788\",\"location_type\":\"host\",\"location_value\":\"np0005548788\"},{\"section\":\"osd\",\"name\":\"osd_memory_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548789\",\"location_type\":\"host\",\"location_value\":\"np0005548789\"},{\"section\":\"osd\",\"name\":\"osd_memor
y_target\",\"value\":\"5709084876\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"host:np0005548790\",\"location_type\":\"host\",\"location_value\":\"np0005548790\"},{\"section\":\"osd\",\"name\":\"osd_memory_target_autotune\",\"value\":\"true\",\"level\":\"advanced\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.mds\",\"name\":\"mds_join_fs\",\"value\":\"mds\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"},{\"section\":\"mds.np0005548790.vhcezv\",\"name\":\"mds_join_fs\",\"value\":\"cephfs\",\"level\":\"basic\",\"can_update_at_runtime\":true,\"mask\":\"\"}]"]} TASK [ceph_migrate : Print Ceph config dump] *********************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph config dump output to log file] ***************** changed: [np0005548785.localdomain] => {"changed": true, "checksum": "496d1655ef49320f2416c6054fc7f54fde4fb543", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_config_dump.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "2923d689586e1c55f478f40b9546b2b9", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 3044, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015889.7403615-62203-221242660558165/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph Orch Host Map] *********************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "orch", "host", "ls", "-f", "json"], "delta": "0:00:03.138382", "end": "2025-12-06 10:11:34.438836", "msg": "", "rc": 0, "start": "2025-12-06 10:11:31.300454", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4"], "stdout": "\n[{\"addr\": \"192.168.122.106\", \"hostname\": \"np0005548788.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005548789.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005548790.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}]", "stdout_lines": ["", "[{\"addr\": \"192.168.122.106\", \"hostname\": \"np0005548788.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.107\", \"hostname\": \"np0005548789.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}, {\"addr\": \"192.168.122.108\", \"hostname\": \"np0005548790.localdomain\", \"labels\": [\"osd\", \"mds\", \"mgr\", \"mon\", \"_admin\"], \"status\": \"\"}]"]} TASK [ceph_migrate : Load nodes] *********************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"nds": [{"addr": "192.168.122.106", "hostname": 
"np0005548788.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}, {"addr": "192.168.122.107", "hostname": "np0005548789.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}, {"addr": "192.168.122.108", "hostname": "np0005548790.localdomain", "labels": ["osd", "mds", "mgr", "mon", "_admin"], "status": ""}]}, "changed": false} TASK [ceph_migrate : Load hostmap List] **************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"hostmap": {"np0005548788.localdomain": ["osd", "mds", "mgr", "mon", "_admin"], "np0005548789.localdomain": ["osd", "mds", "mgr", "mon", "_admin"], "np0005548790.localdomain": ["osd", "mds", "mgr", "mon", "_admin"]}}, "changed": false} TASK [ceph_migrate : Print Host Map] ******************************************* skipping: [np0005548785.localdomain] => (item=np0005548788.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548788.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548789.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548789.localdomain"} skipping: [np0005548785.localdomain] => (item=np0005548790.localdomain) => {"ansible_loop_var": "item", "false_condition": "debug | default(false)", "item": "np0005548790.localdomain"} skipping: [np0005548785.localdomain] => {"msg": "All items skipped"} TASK [ceph_migrate : Dump ceph orch host ls output to log file] **************** changed: [np0005548785.localdomain] => {"changed": true, "checksum": "63ed7327993650e237d65e8b844d21c13485d050", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_orch_host_ls.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "58650c5d4a1a1701e49c777b4b7919ee", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 84, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015894.7351935-62234-78013081417743/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Get Ceph monmap and load data] **************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["cephadm", "shell", "--", "ceph", "mon", "dump", "-f", "json"], "delta": "0:00:03.164046", "end": "2025-12-06 10:11:39.376385", "msg": "", "rc": 0, "start": "2025-12-06 10:11:36.212339", "stderr": "Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8\nInferring config /etc/ceph/ceph.conf\nUsing ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC\nregistry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4\ndumped monmap epoch 15", "stderr_lines": ["Inferring fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8", "Inferring config /etc/ceph/ceph.conf", "Using ceph image with id '273c05181f81' and tag 'latest' created on 2025-11-26 19:45:11 +0000 UTC", "registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:04038939b8141c15690c90ad77253fe2c4e8c5f75ae28b709b459441bbe41de4", "dumped monmap epoch 15"], "stdout": "\n{\"epoch\":15,\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"modified\":\"2025-12-06T10:09:29.475464Z\",\"created\":\"2025-12-06T07:57:14.295835Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: 
\":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005548788\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005548789\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005548790\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}", "stdout_lines": ["", "{\"epoch\":15,\"fsid\":\"1939e851-b10c-5c3b-9bb7-8e7f380233e8\",\"modified\":\"2025-12-06T10:09:29.475464Z\",\"created\":\"2025-12-06T07:57:14.295835Z\",\"min_mon_release\":18,\"min_mon_release_name\":\"reef\",\"election_strategy\":1,\"disallowed_leaders: \":\"\",\"stretch_mode\":false,\"tiebreaker_mon\":\"\",\"removed_ranks: \":\"\",\"features\":{\"persistent\":[\"kraken\",\"luminous\",\"mimic\",\"osdmap-prune\",\"nautilus\",\"octopus\",\"pacific\",\"elector-pinging\",\"quincy\",\"reef\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"np0005548788\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.103:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.103:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.103:6789/0\",\"public_addr\":\"172.18.0.103:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":1,\"name\":\"np0005548789\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.104:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.104:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.104:6789/0\",\"public_addr\":\"172.18.0.104:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"},{\"rank\":2,\"name\":\"np0005548790\",\"public_addrs\":{\"addrvec\":[{\"type\":\"v2\",\"addr\":\"172.18.0.105:3300\",\"nonce\":0},{\"type\":\"v1\",\"addr\":\"172.18.0.105:6789\",\"nonce\":0}]},\"addr\":\"172.18.0.105:6789/0\",\"public_addr\":\"172.18.0.105:6789/0\",\"priority\":0,\"weight\":0,\"crush_location\":\"{}\"}],\"quorum\":[0,1,2]}"]} TASK [ceph_migrate : Get Monmap] *********************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"mon_dump": {"created": "2025-12-06T07:57:14.295835Z", "disallowed_leaders: ": "", "election_strategy": 1, "epoch": 15, "features": {"optional": [], "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging", "quincy", "reef"]}, "fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "min_mon_release": 18, "min_mon_release_name": "reef", "modified": "2025-12-06T10:09:29.475464Z", "mons": [{"addr": "172.18.0.103:6789/0", "crush_location": "{}", "name": "np0005548788", "priority": 0, "public_addr": "172.18.0.103:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.103:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.103:6789", "nonce": 0, "type": 
"v1"}]}, "rank": 0, "weight": 0}, {"addr": "172.18.0.104:6789/0", "crush_location": "{}", "name": "np0005548789", "priority": 0, "public_addr": "172.18.0.104:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.104:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.104:6789", "nonce": 0, "type": "v1"}]}, "rank": 1, "weight": 0}, {"addr": "172.18.0.105:6789/0", "crush_location": "{}", "name": "np0005548790", "priority": 0, "public_addr": "172.18.0.105:6789/0", "public_addrs": {"addrvec": [{"addr": "172.18.0.105:3300", "nonce": 0, "type": "v2"}, {"addr": "172.18.0.105:6789", "nonce": 0, "type": "v1"}]}, "rank": 2, "weight": 0}], "quorum": [0, 1, 2], "removed_ranks: ": "", "stretch_mode": false, "tiebreaker_mon": ""}}, "changed": false} TASK [ceph_migrate : Print monmap] ********************************************* skipping: [np0005548785.localdomain] => {"false_condition": "debug | default(false)"} TASK [ceph_migrate : Dump ceph mon dump output to log file] ******************** changed: [np0005548785.localdomain] => {"changed": true, "checksum": "63ff4002336a4ce4e306b325fbc2812c9f0e2504", "dest": "/home/tripleo-admin/ceph_client/logs/ceph_mon_dump.log", "gid": 1003, "group": "tripleo-admin", "md5sum": "a45ed6c29a7d2f028842ee9c61e7b3db", "mode": "0644", "owner": "tripleo-admin", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 1425, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015899.603046-62263-273694864787057/source", "state": "file", "uid": 1003} TASK [ceph_migrate : Load nodes to decommission] ******************************* ok: [np0005548785.localdomain] => {"ansible_facts": {"decomm_nodes": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "changed": false} TASK [ceph_migrate : Load target nodes] **************************************** ok: [np0005548785.localdomain] => {"ansible_facts": {"target_nodes": ["np0005548788.localdomain", "np0005548789.localdomain", "np0005548790.localdomain"]}, "changed": false} TASK [ceph_migrate : Print target nodes] *************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Print decomm_nodes] *************************************** skipping: [np0005548785.localdomain] => {"false_condition": "debug|default(false)"} TASK [ceph_migrate : Configure Swift to use rgw backend] *********************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "ceph_daemons_layout.rgw | default(true) | bool", "skip_reason": "Conditional result was False"} TASK [ceph_migrate : Flush handlers to ensure mgr restart completes] *********** RUNNING HANDLER [ceph_migrate : restart mgr] *********************************** changed: [np0005548785.localdomain] => {"changed": true, "cmd": ["podman", "run", "--rm", "--net=host", "--ipc=host", "--volume", "/home/tripleo-admin/ceph_client:/etc/ceph:z", "--entrypoint", "ceph", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest", "--fsid", "1939e851-b10c-5c3b-9bb7-8e7f380233e8", "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring", "mgr", "fail"], "delta": "0:00:00.783048", "end": "2025-12-06 10:11:41.901741", "msg": "", "rc": 0, "start": "2025-12-06 10:11:41.118693", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} TASK [ceph_migrate : Install cephadm on all compute nodes] ********************* skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "groups['ComputeHCI'] is defined", "skip_reason": 
"Conditional result was False"} TASK [ceph_migrate : Force fail ceph mgr on first compute node] **************** skipping: [np0005548785.localdomain] => {"changed": false, "false_condition": "groups['ComputeHCI'] is defined", "skip_reason": "Conditional result was False"} PLAY RECAP ********************************************************************* np0005548785.localdomain : ok=243 changed=113 unreachable=0 failed=0 skipped=139 rescued=0 ignored=0