From asellars at vigilantnow.com Fri Jul 5 15:41:33 2019
From: asellars at vigilantnow.com (Allen Sellars)
Date: Fri, 5 Jul 2019 21:41:33 +0000
Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3
Message-ID: <37EE9C88-80C8-40A9-8302-9272A21C8550@vigilantnow.com>

I have a Cisco UCS S3260 with 52 6TB spinning disks and 4 SSDs as DB disks.

I have no profile-* configs in the proposals directory.

I've obscured FQDNs.

Stages 0-2 run fine with no failures. In stage 3, when I run
salt-run state.orch ceph.stage.3, my salt-master returns this:

firewall          : disabled
apparmor          : disabled
subvolume         : skipping
DEV_ENV           : True
fsid              : valid
public_network    : valid
public_interface  : valid
cluster_network   : valid
cluster_interface : valid
ip_version        : valid
monitors          : valid
mgrs              : valid
storage           : valid
storage_role      : valid
rgw               : valid
ganesha           : valid
master_role       : valid
time_server       : valid
fqdn              : valid

Found DriveGroup
Calling dg.deploy on compound target I@roles:storage

[ERROR   ] {'out': 'highstate', 'ret': {'s1103': {'module_|-wait for osd processes_|-cephprocesses.wait_|-run': {'name': 'cephprocesses.wait', 'changes': {'ret': False}, 'comment': 'Module function cephprocesses.wait executed', 'result': False, '__sls__': 'ceph.processes.osd.default', '__run_num__': 0, 'start_time': '21:11:52.567915', 'duration': 933229.3, '__id__': 'wait for osd processes'}}}}

admin_master:
  Name: populate scrape configs - Function: salt.state - Result: Changed Started: - 21:03:44.756802 Duration: 6245.346 ms
  Name: populate alertmanager peers - Function: salt.state - Result: Changed Started: - 21:03:51.002480 Duration: 2236.262 ms
  Name: fileserver.clear_file_list_cache - Function: salt.runner - Result: Changed Started: - 21:03:53.239063 Duration: 795.163 ms
  Name: install prometheus - Function: salt.state - Result: Clean Started: - 21:03:54.034548 Duration: 62470.849 ms
  Name: push scrape configs - Function: salt.state - Result: Changed Started: - 21:04:56.505708 Duration: 1903.123 ms
  Name: install alertmanager - Function: salt.state - Result: Clean Started: - 21:04:58.409127 Duration: 38233.69 ms
  Name: populate grafana config fragments - Function: salt.state - Result: Clean Started: - 21:05:36.643118 Duration: 2777.631 ms
  Name: install grafana - Function: salt.state - Result: Clean Started: - 21:05:39.420958 Duration: 109235.045 ms
  Name: time - Function: salt.state - Result: Changed Started: - 21:07:28.656309 Duration: 46245.898 ms
  Name: configuration check - Function: salt.state - Result: Clean Started: - 21:08:14.902504 Duration: 920.39 ms
  Name: create ceph.conf - Function: salt.state - Result: Changed Started: - 21:08:15.823194 Duration: 5960.673 ms
  Name: configuration - Function: salt.state - Result: Changed Started: - 21:08:21.784167 Duration: 2212.586 ms
  Name: admin - Function: salt.state - Result: Clean Started: - 21:08:23.996960 Duration: 962.78 ms
  Name: mgr keyrings - Function: salt.state - Result: Clean Started: - 21:08:24.960033 Duration: 1011.557 ms
  Name: monitors - Function: salt.state - Result: Changed Started: - 21:08:25.971888 Duration: 17081.809 ms
  Name: mgr auth - Function: salt.state - Result: Changed Started: - 21:08:43.054000 Duration: 5225.853 ms
  Name: mgrs - Function: salt.state - Result: Changed Started: - 21:08:48.280055 Duration: 44721.595 ms
  Name: install ca cert in mgr minions - Function: salt.state - Result: Changed Started: - 21:09:33.001952 Duration: 2575.651 ms
  Name: retry.cmd - Function: salt.function - Result: Changed Started: - 21:09:35.577911 Duration: 1095.445 ms
  Name: dashboard - Function: salt.state - Result: Changed Started: - 21:09:36.673648 Duration: 7548.494 ms
  Name: osd auth - Function: salt.state - Result: Changed Started: - 21:09:44.222446 Duration: 2545.84 ms
  Name: sysctl - Function: salt.state - Result: Changed Started: - 21:09:46.768599 Duration: 758.771 ms
  Name: set osd keyrings - Function: salt.state - Result: Clean Started: - 21:09:47.527655 Duration: 749.077 ms
  Name: disks.deploy - Function: salt.runner - Result: Changed Started: - 21:09:48.277007 Duration: 30133.606 ms
  Name: mgr tuned - Function: salt.state - Result: Changed Started: - 21:10:18.410965 Duration: 19570.008 ms
  Name: mon tuned - Function: salt.state - Result: Changed Started: - 21:10:37.981281 Duration: 18494.201 ms
  Name: osd tuned - Function: salt.state - Result: Changed Started: - 21:10:56.475786 Duration: 13570.12 ms
  Name: pools - Function: salt.state - Result: Changed Started: - 21:11:10.046234 Duration: 6984.189 ms
  Name: wait until mon2 with role mon can be restarted - Function: salt.state - Result: Changed Started: - 21:11:17.030734 Duration: 7574.266 ms
  Name: check if mon processes are still running on mon2 after restarting mons - Function: salt.state - Result: Changed Started: - 21:11:24.605309 Duration: 992.112 ms
  Name: restarting mons on mon2 - Function: salt.state - Result: Clean Started: - 21:11:25.597732 Duration: 1626.034 ms
  Name: wait until mon3 with role mon can be restarted - Function: salt.state - Result: Changed Started: - 21:11:27.224053 Duration: 6971.541 ms
  Name: check if mon processes are still running on mon3 after restarting mons - Function: salt.state - Result: Changed Started: - 21:11:34.195896 Duration: 994.762 ms
  Name: restarting mons on mon3 - Function: salt.state - Result: Clean Started: - 21:11:35.190958 Duration: 1620.595 ms
  Name: wait until mon1 with role mon can be restarted - Function: salt.state - Result: Changed Started: - 21:11:36.811852 Duration: 7322.913 ms
  Name: check if mon processes are still running on mon1 after restarting mons - Function: salt.state - Result: Changed Started: - 21:11:44.135071 Duration: 988.726 ms
  Name: restarting mons on mon1 - Function: salt.state - Result: Clean Started: - 21:11:45.124101 Duration: 1625.383 ms
  Name: wait until mon2 with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 21:11:46.749778 Duration: 6997.631 ms
  Name: check if mgr processes are still running on mon2 after restarting mgrs - Function: salt.state - Result: Changed Started: - 21:11:53.747726 Duration: 990.767 ms
  Name: restarting mgr on mon2 - Function: salt.state - Result: Clean Started: - 21:11:54.738787 Duration: 1666.851 ms
  Name: wait until mon3 with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 21:11:56.405937 Duration: 6978.253 ms
  Name: check if mgr processes are still running on mon3 after restarting mgrs - Function: salt.state - Result: Changed Started: - 21:12:03.384521 Duration: 2233.584 ms
  Name: restarting mgr on mon3 - Function: salt.state - Result: Clean Started: - 21:12:05.618424 Duration: 1572.339 ms
  Name: wait until mon1 with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 21:12:07.191059 Duration: 6980.657 ms
  Name: check if mgr processes are still running on mon1 after restarting mgrs - Function: salt.state - Result: Changed Started: - 21:12:14.172025 Duration: 992.372 ms
  Name: restarting mgr on mon1 - Function: salt.state - Result: Clean Started: - 21:12:15.164731 Duration: 1618.461 ms
  Name: wait until s1103 with role osd can be restarted - Function: salt.state - Result: Changed Started: - 21:12:16.783490 Duration: 6973.487 ms
----------
          ID: check if osd processes are still running on s1103 after restarting osds
    Function: salt.state
      Result: False
     Comment: Run failed on minions: s1103
     Started: 21:12:23.757280
    Duration: 934163.423 ms
     Changes:
              s1103:
              ----------
                        ID: wait for osd processes
                  Function: module.run
                      Name: cephprocesses.wait
                    Result: False
                   Comment: Module function cephprocesses.wait executed
                   Started: 21:11:52.567915
                  Duration: 933229.3 ms
                   Changes:
                            ----------
                            ret:
                                False

              Summary for s1103
              ------------
              Succeeded: 0 (changed=1)
              Failed:    1
              ------------
              Total states run:     1
              Total run time: 933.229 s

Summary for admin_master
-------------
Succeeded: 47 (changed=33)
Failed:     1

When running salt-run disks.report I get this:

salt-run disks.report
Found DriveGroup
Calling dg.report on compound target I@roles:storage
|_
  ----------
  s1103:
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db                  Total size: 185.00 GB
            Total LVs: 13                        Size per LV: 14.23 GB
            Devices:   /dev/sdbc

            Type            Path                 LV Size         % of device
          ----------------------------------------------------------------------
            [data]          /dev/sdaa            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdae            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdai            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdam            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdaq            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdau            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sday            5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdc             5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdg             5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdk             5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdo             5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sds             5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdw             5.46 TB         100.0%
            [block.db]      vg: vg/lv            14.23 GB        7%
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db                  Total size: 371.00 GB
            Total LVs: 13                        Size per LV: 28.54 GB
            Devices:   /dev/sdbd

            Type            Path                 LV Size         % of device
          ----------------------------------------------------------------------
            [data]          /dev/sdab            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdaf            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdaj            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdan            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdar            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdav            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdaz            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdd             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdh             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdl             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdp             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdt             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdx             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db                  Total size: 371.00 GB
            Total LVs: 13                        Size per LV: 28.54 GB
            Devices:   /dev/sdbe

            Type            Path                 LV Size         % of device
          ----------------------------------------------------------------------
            [data]          /dev/sdac            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdag            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdak            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdao            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdas            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdaw            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdba            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sde             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdi             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdm             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdq             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdu             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdy             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db                  Total size: 371.00 GB
            Total LVs: 13                        Size per LV: 28.54 GB
            Devices:   /dev/sdbf

            Type            Path                 LV Size         % of device
          ----------------------------------------------------------------------
            [data]          /dev/sdad            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdah            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdal            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdap            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdat            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdax            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdbb            5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdf             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdj             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdn             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdr             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdv             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%
          ----------------------------------------------------------------------
            [data]          /dev/sdz             5.46 TB         100.0%
            [block.db]      vg: vg/lv            28.54 GB        7%

Here's my drive_groups.yml:

drive_group_default:
  target: 'I@roles:storage'
  data_devices:
    size: '2TB:'
    rotational: 1
  db_devices:
    rotational: 0
    size: '180GB:'

Here's what I see on the minion itself:

2019-07-05 20:52:27,750 [salt.utils.decorators:613 ][WARNING ][65394] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-07-05 21:07:26,314 [salt.utils.decorators:613 ][WARNING ][66129] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-07-05 21:11:52,568 [salt.utils.decorators:613 ][WARNING ][70185] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-07-05 21:11:52,692 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:11:55,801 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:12:01,899 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:12:11,001 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:12:23,104 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:12:38,210 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:12:56,321 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:13:17,437 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:13:41,557 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:14:08,676 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:14:38,801 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:15:11,930 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:15:48,031 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:16:27,128 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:17:09,232 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:17:54,371 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:18:42,510 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:19:33,653 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:20:27,799 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:21:24,951 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:22:25,107 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:23:25,270 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:24:25,425 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:25:25,582 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:26:25,737 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:27:25,796 [salt.loaded.ext.module.cephprocesses:389 ][ERROR   ][70185] Timeout expired
2019-07-05 21:27:25,797 [salt.state       :323 ][ERROR   ][70185] {'ret': False}

What am I missing here? The OSDs were deploying fine with DeepSea and Ceph 13 (Mimic) using the storage profiles in profile-*.
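If it helps to see what I expect the drive group to match: my understanding is that data_devices/db_devices filter on each disk's rotational flag and size (I read '2TB:' as "2 TB and larger"). A rough preview of that split, with a hypothetical classify helper and made-up sample rows (the size bounds are ignored in this sketch):

classify() {
  # Input columns: NAME ROTA SIZE TYPE (what `lsblk -d -n -b -o NAME,ROTA,SIZE,TYPE` prints).
  # Whole disks with ROTA=1 would fall under data_devices, ROTA=0 under db_devices.
  awk '$4 == "disk" {
         role = ($2 == 1) ? "data (rotational)" : "db (non-rotational)"
         printf "/dev/%s -> %s\n", $1, role
       }'
}

# Sample rows standing in for real lsblk output (sizes in bytes):
classify <<'EOF'
sdaa 1 6001175126016 disk
sdbc 0 199715979264 disk
sr0 1 1073741824 rom
EOF

On the storage node itself the here-doc would be replaced by piping the actual lsblk call into classify.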
I seem to be missing something with the new drive_group setup Allen -------------- next part -------------- An HTML attachment was scrubbed... URL: From hunter86_bg at yahoo.com Fri Jul 5 16:01:20 2019 From: hunter86_bg at yahoo.com (Strahil) Date: Sat, 06 Jul 2019 01:01:20 +0300 Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3 Message-ID: Hi Allen, I think that you need empty disks for deepsea to 'target' them. Can you wipe the partition's beginning, disk beginning and disk end ? Should be something like: for partition in /dev/sdX[0-9]* do dd if=/dev/zero of=$partition bs=4096 count=1 oflag=direct done dd if=/dev/zero of=/dev/sdX bs=512 count=34 oflag=direct dd if=/dev/zero of=/dev/sdX bs=512 count=33 \ seek=$((`blockdev --getsz /dev/sdX` - 33)) oflag=direct And then create a gpt partition table: sgdisk -Z --clear -g /dev/sdX Source: https://www.google.bg/url?sa=t&source=web&rct=j&url=https://www.suse.com/documentation/suse-enterprise-storage-5/pdfdoc/book_storage_deployment/book_storage_deployment.pdf&ved=2ahUKEwj_2ouC4p7jAhWkwqYKHd7OBJUQFjAAegQIARAB&usg=AOvVaw3g9_lOOBwwzqK3siEkNbnF Best Regards, Strahil NikolovOn Jul 6, 2019 00:41, Allen Sellars wrote: > > I have a cisco UCS S3260 with 52 6TB spinning disks and 4 SSDs as DB disks. > > ? > > I have no profile-* configs in the proposals directory. > > ? > > I?ve obscured FQDNs > > ? > > Stages 0-2 run fine with no failures. I see the following in stage 3: > > When I run salt-run state.orch ceph.stage.3 my salt-master return this: > > ? > > firewall???????????????? : disabled > > apparmor???????????????? : disabled > > subvolume??????????????? : skipping > > DEV_ENV????????????????? : True > > fsid???????????????????? : valid > > public_network?????????? : valid > > public_interface???????? : valid > > cluster_network????????? : valid > > cluster_interface??????? : valid > > ip_version?????????????? : valid > > monitors???????????????? : valid > > mgrs???????????????????? 
: valid > > storage????????????????? : valid > > storage_role???????????? : valid > > rgw????????????????????? : valid > > ganesha????????????????? : valid > > master_role????????????? : valid > > time_server????????????? : valid > > fqdn???????????????????? : valid > > ? > > Found DriveGroup > > Calling dg.deploy on compound target I at roles:storage > > ? > > ? > > ? > > ? > > ? > > [ERROR?? ] {'out': 'highstate', 'ret': {'s1103': {'module_|-wait for osd processes_|-cephprocesses.wait_|-run': {'name': 'cephprocesses.wait', 'changes': {'ret': False}, 'comment': 'Module function cephprocesses.wait executed', 'result': False, '__sls__': 'ceph.processes.osd.default', '__run_num__': 0, 'start_time': '21:11:52.567915', 'duration': 933229.3, '__id__': 'wait for osd processes'}}}} > > admin_master: > > ? Name: populate scrape configs - Function: salt.state - Result: Changed Started: - 21:03:44.756802 Duration: 6245.346 ms > > ? Name: populate alertmanager peers - Function: salt.state - Result: Changed Started: - 21:03:51.002480 Duration: 2236.262 ms > > ? Name: fileserver.clear_file_list_cache - Function: salt.runner - Result: Changed Started: - 21:03:53.239063 Duration: 795.163 ms > > ? Name: install prometheus - Function: salt.state - Result: Clean Started: - 21:03:54.034548 Duration: 62470.849 ms > > ? Name: push scrape configs - Function: salt.state - Result: Changed Started: - 21:04:56.505708 Duration: 1903.123 ms > > ? Name: install alertmanager - Function: salt.state - Result: Clean Started: - 21:04:58.409127 Duration: 38233.69 ms > > ? Name: populate grafana config fragments - Function: salt.state - Result: Clean Started: - 21:05:36.643118 Duration: 2777.631 ms > > ? Name: install grafana - Function: salt.state - Result: Clean Started: - 21:05:39.420958 Duration: 109235.045 ms > > ? Name: time - Function: salt.state - Result: Changed Started: - 21:07:28.656309 Duration: 46245.898 ms > > ? 
Name: configuration check - Function: salt.state - Result: Clean Started: - 21:08:14.902504 Duration: 920.39 ms > > ? Name: create ceph.conf - Function: salt.state - Result: Changed Started: - 21:08:15.823194 Duration: 5960.673 ms > > ? Name: configuration - Function: salt.state - Result: Changed Started: - 21:08:21.784167 Duration: 2212.586 ms > > ? Name: admin - Function: salt.state - Result: Clean Started: - 21:08:23.996960 Duration: 962.78 ms > > ? Name: mgr keyrings - Function: salt.state - Result: Clean Started: - 21:08:24.960033 Duration: 1011.557 ms > > ? Name: monitors - Function: salt.state - Result: Changed Started: - 21:08:25.971888 Duration: 17081.809 ms > > ? Name: mgr auth - Function: salt.state - Result: Changed Started: - 21:08:43.054000 Duration: 5225.853 ms > > ? Name: mgrs - Function: salt.state - Result: Changed Started: - 21:08:48.280055 Duration: 44721.595 ms > > ? Name: install ca cert in mgr minions - Function: salt.state - Result: Changed Started: - 21:09:33.001952 Duration: 2575.651 ms > > ? Name: retry.cmd - Function: salt.function - Result: Changed Started: - 21:09:35.577911 Duration: 1095.445 ms > > ? Name: dashboard - Function: salt.state - Result: Changed Started: - 21:09:36.673648 Duration: 7548.494 ms > > ? Name: osd auth - Function: salt.state - Result: Changed Started: - 21:09:44.222446 Duration: 2545.84 ms > > ? Name: sysctl - Function: salt.state - Result: Changed Started: - 21:09:46.768599 Duration: 758.771 ms > > ? Name: set osd keyrings - Function: salt.state - Result: Clean Started: - 21:09:47.527655 Duration: 749.077 ms > > ? Name: disks.deploy - Function: salt.runner - Result: Changed Started: - 21:09:48.277007 Duration: 30133.606 ms > > ? Name: mgr tuned - Function: salt.state - Result: Changed Started: - 21:10:18.410965 Duration: 19570.008 ms > > ? Name: mon tuned - Function: salt.state - Result: Changed Started: - 21:10:37.981281 Duration: 18494.201 ms > > ? 
Name: osd tuned - Function: salt.state - Result: Changed Started: - 21:10:56.475786 Duration: 13570.12 ms > > ? Name: pools - Function: salt.state - Result: Changed Started: - 21:11:10.046234 Duration: 6984.189 ms > > ? Name: wait until mon2 with role mon can be restarted - Function: salt.state - Result: Changed Started: - 21:11:17.030734 Duration: 7574.266 ms > > ? Name: check if mon processes are still running on mon2 after restarting mons - Function: salt.state - Result: Changed Started: - 21:11:24.605309 Duration: 992.112 ms > > ? Name: restarting mons on mon2 - Function: salt.state - Result: Clean Started: - 21:11:25.597732 Duration: 1626.034 ms > > ? Name: wait until mon3 with role mon can be restarted - Function: salt.state - Result: Changed Started: - 21:11:27.224053 Duration: 6971.541 ms > > ? Name: check if mon processes are still running on mon3 after restarting mons - Function: salt.state - Result: Changed Started: - 21:11:34.195896 Duration: 994.762 ms > > ? Name: restarting mons on mon3 - Function: salt.state - Result: Clean Started: - 21:11:35.190958 Duration: 1620.595 ms > > ? Name: wait until mon1 with role mon can be restarted - Function: salt.state - Result: Changed Started: - 21:11:36.811852 Duration: 7322.913 ms > > ? Name: check if mon processes are still running on mon1 after restarting mons - Function: salt.state - Result: Changed Started: - 21:11:44.135071 Duration: 988.726 ms > > ? Name: restarting mons on mon1 - Function: salt.state - Result: Clean Started: - 21:11:45.124101 Duration: 1625.383 ms > > ? Name: wait until mon2 with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 21:11:46.749778 Duration: 6997.631 ms > > ? Name: check if mgr processes are still running on mon2 after restarting mgrs - Function: salt.state - Result: Changed Started: - 21:11:53.747726 Duration: 990.767 ms > > ? Name: restarting mgr on mon2 - Function: salt.state - Result: Clean Started: - 21:11:54.738787 Duration: 1666.851 ms > > ? 
Name: wait until mon3 with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 21:11:56.405937 Duration: 6978.253 ms > > ? Name: check if mgr processes are still running on mon3 after restarting mgrs - Function: salt.state - Result: Changed Started: - 21:12:03.384521 Duration: 2233.584 ms > > ? Name: restarting mgr on mon3 - Function: salt.state - Result: Clean Started: - 21:12:05.618424 Duration: 1572.339 ms > > ? Name: wait until mon1 with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 21:12:07.191059 Duration: 6980.657 ms > > ? Name: check if mgr processes are still running on mon1 after restarting mgrs - Function: salt.state - Result: Changed Started: - 21:12:14.172025 Duration: 992.372 ms > > ? Name: restarting mgr on mon1 - Function: salt.state - Result: Clean Started: - 21:12:15.164731 Duration: 1618.461 ms > > ? Name: wait until s1103 with role osd can be restarted - Function: salt.state - Result: Changed Started: - 21:12:16.783490 Duration: 6973.487 ms > > ---------- > > ????????? ID: check if osd processes are still running on s1103 after restarting osds > > ??? Function: salt.state > > ????? Result: False > > ???? Comment: Run failed on minions: s1103 > > ???? Started: 21:12:23.757280 > > ??? Duration: 934163.423 ms > > ???? Changes: > > ????????????? s1103: > > ????????????? ---------- > > ??????????????????????? ID: wait for osd processes > > ????????????????? Function: module.run > > ????? ????????????????Name: cephprocesses.wait > > ??????????????????? Result: False > > ?????????????????? Comment: Module function cephprocesses.wait executed > > ?????????????????? Started: 21:11:52.567915 > > ????????????????? Duration: 933229.3 ms > > ?????????????????? Changes: > > ??????????????????????????? ---------- > > ??????????????????????????? ret: > > ??????????????????????????????? False > > ? > > ????????????? Summary for s1103 > > ????????????? ------------ > > ????????????? 
              Succeeded: 0 (changed=1)
              Failed:    1
              ------------
              Total states run:     1
              Total run time: 933.229 s

Summary for admin_master
-------------
Succeeded: 47 (changed=33)
Failed:     1

When running salt-run disks.report I get this:

salt-run disks.report
Found DriveGroup
Calling dg.report on compound target I@roles:storage
|_
  ----------
  s1103:
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db        Total size: 185.00 GB
            Total LVs: 13              Size per LV: 14.23 GB
            Devices:   /dev/sdbc

            [data] devices (5.46 TB each, 100.0% of device):
              /dev/sdaa /dev/sdae /dev/sdai /dev/sdam /dev/sdaq /dev/sdau
              /dev/sday /dev/sdc  /dev/sdg  /dev/sdk  /dev/sdo  /dev/sds
              /dev/sdw
            [block.db] (vg: vg/lv): 14.23 GB per OSD (7% of device)
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db        Total size: 371.00 GB
            Total LVs: 13              Size per LV: 28.54 GB
            Devices:   /dev/sdbd

            [data] devices (5.46 TB each, 100.0% of device):
              /dev/sdab /dev/sdaf /dev/sdaj /dev/sdan /dev/sdar /dev/sdav
              /dev/sdaz /dev/sdd  /dev/sdh  /dev/sdl  /dev/sdp  /dev/sdt
              /dev/sdx
            [block.db] (vg: vg/lv): 28.54 GB per OSD (7% of device)
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db        Total size: 371.00 GB
            Total LVs: 13              Size per LV: 28.54 GB
            Devices:   /dev/sdbe

            [data] devices (5.46 TB each, 100.0% of device):
              /dev/sdac /dev/sdag /dev/sdak /dev/sdao /dev/sdas /dev/sdaw
              /dev/sdba /dev/sde  /dev/sdi  /dev/sdm  /dev/sdq  /dev/sdu
              /dev/sdy
            [block.db] (vg: vg/lv): 28.54 GB per OSD (7% of device)
      |_
        - 0
        -
          Total OSDs: 13

          Solid State VG:
            Targets:   block.db        Total size: 371.00 GB
            Total LVs: 13              Size per LV: 28.54 GB
            Devices:   /dev/sdbf

            [data] devices (5.46 TB each, 100.0% of device):
              /dev/sdad /dev/sdah /dev/sdal /dev/sdap /dev/sdat /dev/sdax
              /dev/sdbb /dev/sdf  /dev/sdj  /dev/sdn  /dev/sdr  /dev/sdv
              /dev/sdz
            [block.db] (vg: vg/lv): 28.54 GB per OSD (7% of device)

Here's my drive_groups.yml:

drive_group_default:
  target: 'I@roles:storage'
  data_devices:
    size: '2TB:'
    rotational: 1
  db_devices:
    rotational: 0
    size: '180GB:'

Here's what I see on the minion itself:
2019-07-05 20:52:27,750 [salt.utils.decorators:613 ][WARNING ][65394] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-07-05 21:07:26,314 [salt.utils.decorators:613 ][WARNING ][66129] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-07-05 21:11:52,568 [salt.utils.decorators:613 ][WARNING ][70185] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-07-05 21:11:52,692 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:11:55,801 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
[... the same cephprocesses error repeats with an increasing poll interval ...]
2019-07-05 21:26:25,737 [salt.loaded.ext.module.cephprocesses:252 ][ERROR   ][70185] ERROR: process ceph-osd for role storage is not running
2019-07-05 21:27:25,796 [salt.loaded.ext.module.cephprocesses:389 ][ERROR   ][70185] Timeout expired
2019-07-05 21:27:25,797 [salt.state       :323 ][ERROR   ][70185] {'ret': False}

What am I missing here? The OSDs were deploying with deepsea and ceph 13 mimic and the storage profiles in profile-*. I seem to be missing something with the new drive_group setup.

Allen

From asellars at vigilantnow.com  Sat Jul  6 05:24:36 2019
From: asellars at vigilantnow.com (Allen Sellars)
Date: Sat, 6 Jul 2019 11:24:36 +0000
Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3
In-Reply-To: References:
Message-ID: <1CDEE890-5D48-4CC4-85A2-1A0E9AD5C740@vigilantnow.com>

gdisk was reporting no MBR and no GPT partitions, so I assumed they were safe to use. I'll go through zeroing them out with this process and report back.

Thanks

Allen Sellars
asellars at vigilantnow.com
Sent from my iPhone

On Jul 5, 2019, at 18:04, Strahil wrote:

Hi Allen,

I think that you need empty disks for deepsea to 'target' them.
Can you wipe the partition's beginning, disk beginning and disk end?
Should be something like:

    for partition in /dev/sdX[0-9]*
    do
        dd if=/dev/zero of=$partition bs=4096 count=1 oflag=direct
    done

    dd if=/dev/zero of=/dev/sdX bs=512 count=34 oflag=direct
    dd if=/dev/zero of=/dev/sdX bs=512 count=33 \
        seek=$((`blockdev --getsz /dev/sdX` - 33)) oflag=direct

And then create a gpt partition table:

    sgdisk -Z --clear -g /dev/sdX

Source: https://www.suse.com/documentation/suse-enterprise-storage-5/pdfdoc/book_storage_deployment/book_storage_deployment.pdf

Best Regards,
Strahil Nikolov

On Jul 6, 2019 00:41, Allen Sellars wrote:
> I have a cisco UCS S3260 with 52 6TB spinning disks and 4 SSDs as DB disks. [...]
---------------------------------------------------------------------------------------------------- [data] /dev/sdas 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdaw 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdba 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sde 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdi 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdm 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdq 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdu 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdy 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% |_ - 0 - Total OSDs: 13 Solid State VG: Targets: block.db Total size: 371.00 GB Total LVs: 13 Size per LV: 28.54 GB Devices: /dev/sdbf Type Path LV Size % of device ---------------------------------------------------------------------------------------------------- [data] /dev/sdad 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdah 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% 
---------------------------------------------------------------------------------------------------- [data] /dev/sdal 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdap 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdat 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdax 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdbb 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdf 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdj 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdn 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdr 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdv 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7% ---------------------------------------------------------------------------------------------------- [data] /dev/sdz 5.46 TB 100.0% [block.db] vg: vg/lv 28.54 GB 7%

Here's my drive_groups.yml:

drive_group_default:
  target: 'I@roles:storage'
  data_devices:
    size: '2TB:'
    rotational: 1
  db_devices:
    rotational: 0
    size: '180GB:'

Here's what I see
on the minion itself: 2019-07-05 20:52:27,750 [salt.utils.decorators:613 ][WARNING ][65394] The function "module.run" is using its deprecated version and will expire in version "Sodium". 2019-07-05 21:07:26,314 [salt.utils.decorators:613 ][WARNING ][66129] The function "module.run" is using its deprecated version and will expire in version "Sodium". 2019-07-05 21:11:52,568 [salt.utils.decorators:613 ][WARNING ][70185] The function "module.run" is using its deprecated version and will expire in version "Sodium". 2019-07-05 21:11:52,692 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:11:55,801 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:12:01,899 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:12:11,001 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:12:23,104 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:12:38,210 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:12:56,321 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:13:17,437 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:13:41,557 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:14:08,676 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:14:38,801 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: 
process ceph-osd for role storage is not running 2019-07-05 21:15:11,930 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:15:48,031 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:16:27,128 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:17:09,232 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:17:54,371 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:18:42,510 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:19:33,653 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:20:27,799 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:21:24,951 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:22:25,107 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:23:25,270 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:24:25,425 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:25:25,582 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 2019-07-05 21:26:25,737 [salt.loaded.ext.module.cephprocesses:252 ][ERROR ][70185] ERROR: process ceph-osd for role storage is not running 
2019-07-05 21:27:25,796 [salt.loaded.ext.module.cephprocesses:389 ][ERROR ][70185] Timeout expired 2019-07-05 21:27:25,797 [salt.state :323 ][ERROR ][70185] {'ret': False}

What am I missing here? The OSDs were deploying with deepsea and ceph 13 mimic and the storage profiles in profile-*. I seem to be missing something with the new drive_group setup.

Allen
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From hunter86_bg at yahoo.com Sat Jul 6 07:16:41 2019 From: hunter86_bg at yahoo.com (Strahil) Date: Sat, 06 Jul 2019 16:16:41 +0300 Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3 Message-ID:

Also 1 more thing - check if ceph-osd@X.service has failed and any reasons behind that.

Best Regards,
Strahil Nikolov

On Jul 6, 2019 14:24, Allen Sellars wrote:
> gdisk was reporting no MBR and no GPT partitions, so I assumed they were safe to use.
>
> I'll go through zeroing them out with this process and report back.
>
> Thanks
>
> Allen Sellars
> asellars at vigilantnow.com
>
> -Sent from my iPhone
>
> On Jul 5, 2019, at 18:04, Strahil wrote:
>
>> Hi Allen,
>>
>> I think that you need empty disks for deepsea to 'target' them.
>>
>> Can you wipe the partition's beginning, disk beginning and disk end?
>>
>> Should be something like:
>>
>> for partition in /dev/sdX[0-9]*
>> do
>>   dd if=/dev/zero of=$partition bs=4096 count=1 oflag=direct
>> done
>>
>> dd if=/dev/zero of=/dev/sdX bs=512 count=34 oflag=direct
>>
>> dd if=/dev/zero of=/dev/sdX bs=512 count=33 \
>>   seek=$((`blockdev --getsz /dev/sdX` - 33)) oflag=direct
>>
>> And then create a GPT partition table:
>>
>> sgdisk -Z --clear -g /dev/sdX
>>
>> Source: https://www.google.bg/url?sa=t&source=web&rct=j&url=https://www.suse.com/documentation/suse-enterprise-storage-5/pdfdoc/book_storage_deployment/book_storage_deployment.pdf&ved=2ahUKEwj_2ouC4p7jAhWkwqYKHd7OBJUQFjAAegQIARAB&usg=AOvVaw3g9_lOOBwwzqK3siEkNbnF
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Jul 6, 2019 00:41, Allen Sellars wrote:
>>> I have a cisco UCS S3260 with 52 6TB spinning disks and 4 SSDs as DB disks.
>>>
>>> I have no profile-* configs in the proposals directory.
>>>
>>> I've obscured FQDNs
>>>
>>> Stages 0-2 run fine with no failures. I see the following in stage 3:
>>> When I run salt-run state.orch ceph.stage.3 my salt-master returns this:
>>>
>>> firewall                 : disabled
>>> apparmor                 : disabled
>>> subvolume                : skipping
>>> DEV_ENV                  : True
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From asellars at vigilantnow.com Mon Jul 8 11:15:43 2019 From: asellars at vigilantnow.com (Allen Sellars) Date: Mon, 8 Jul 2019 17:15:43 +0000 Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3 In-Reply-To: References: Message-ID: <5A785A61-1667-479B-88B3-25FC4F651C44@vigilantnow.com>

So I've run through the documented disk reset process that you attached, and rebooted the storage server. I'm still getting the same error and issue running stage 3. I adjusted the dd bs settings to accommodate the disks being 4k block size.

Is there somewhere that may give me more verbose log info on the specific task that's failing, or a way that I can try running that particular pillar by hand and try debugging the steps?
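[Editor's note: the wipe sequence quoted in this thread can be dry-run safely against a scratch image file before it is pointed at real hardware. Everything below is illustrative — disk.img stands in for /dev/sdX, the 10 MiB size and SS=512 are assumptions; on a real disk you would take the sector size from `blockdev --getss`, the sector count from `blockdev --getsz`, and add oflag=direct to the dd calls.]

```shell
set -e
DISK=disk.img                                  # hypothetical scratch image, not a real device
dd if=/dev/urandom of="$DISK" bs=1M count=10 2>/dev/null   # stand-in for a used disk
SS=512                                         # on a real disk: SS=$(blockdev --getss "$DISK")
TOTAL=$(( (10 * 1024 * 1024) / SS ))           # on a real disk: TOTAL=$(blockdev --getsz "$DISK")

# Zero the first 34 sectors: protective MBR, GPT header, partition entries.
dd if=/dev/zero of="$DISK" bs="$SS" count=34 conv=notrunc 2>/dev/null
# Zero the last 33 sectors: backup GPT header and partition entries.
dd if=/dev/zero of="$DISK" bs="$SS" count=33 seek=$(( TOTAL - 33 )) conv=notrunc 2>/dev/null
```

Note that `blockdev --getsz` always reports 512-byte units, so on a 4k-logical-sector drive (where `--getss` returns 4096, as in Allen's follow-up) the end-of-disk seek has to be recomputed rather than reused as-is.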
I'm also open to other troubleshooting suggestions. I was able to get everything deployed on these OSDs with deepsea and ceph mimic, so I know the configuration was at least supported by the automation at some point.

Allen
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From hunter86_bg at yahoo.com Mon Jul 8 11:37:28 2019 From: hunter86_bg at yahoo.com (Strahil Nikolov) Date: Mon, 8 Jul 2019 17:37:28 +0000 (UTC) Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3 In-Reply-To: <5A785A61-1667-479B-88B3-25FC4F651C44@vigilantnow.com> References: <5A785A61-1667-479B-88B3-25FC4F651C44@vigilantnow.com> Message-ID: <473428644.2855891.1562607448192@mail.yahoo.com>

Hi Allen,

You can run 'deepsea stage run ceph.stage.3', which will show you the exact step that fails.

Another approach is to use something like: 'salt-run -l debug state.orch ceph.stage.3 | tee -a /somelog'

Last time I checked the error, it seemed to have issues starting the OSD services. How much resources do you have on the nodes? I mean, do you use VMs with too little RAM?

Best Regards,
Strahil Nikolov

From asellars at vigilantnow.com Tue Jul 9 08:03:48 2019 From: asellars at vigilantnow.com (Allen Sellars) Date: Tue, 9 Jul 2019 14:03:48 +0000 Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3 In-Reply-To: <473428644.2855891.1562607448192@mail.yahoo.com> References: <5A785A61-1667-479B-88B3-25FC4F651C44@vigilantnow.com> <473428644.2855891.1562607448192@mail.yahoo.com> Message-ID: <89B336B7-90AD-43A9-9236-22B403B5308D@vigilantnow.com>

Strahil,

The storage node has 256GB of RAM and it's a dedicated physical host.

I was able to get the disks to deploy after ONLY running "dd if=/dev/zero of=/dev/sdX bs=4k count=34 oflag=direct". Note that I modified the block size to accommodate the 4k block size of my disks. After zeroing the first 34 4k sectors again, and rerunning stage 3, the OSDs deployed. Running all of the listed steps resulted in a prepped GPT partition on the drives, which deepsea/salt complained about.

Everything is now deployed, and my cluster and dashboard are up and healthy. Thanks for the guidance!

Allen

From hunter86_bg at yahoo.com Tue Jul 9 08:11:23 2019 From: hunter86_bg at yahoo.com (Strahil Nikolov) Date: Tue, 9 Jul 2019 14:11:23 +0000 (UTC) Subject: [Deepsea-users] Deepsea fails to deploy OSDs in stage 3 In-Reply-To: <89B336B7-90AD-43A9-9236-22B403B5308D@vigilantnow.com> References: <5A785A61-1667-479B-88B3-25FC4F651C44@vigilantnow.com> <473428644.2855891.1562607448192@mail.yahoo.com> <89B336B7-90AD-43A9-9236-22B403B5308D@vigilantnow.com> Message-ID: <603512890.3130368.1562681483824@mail.yahoo.com>

Thanks, it's nice to know that they should be completely wiped.

Best Regards,
Strahil Nikolov
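[Editor's note: the resolution of this thread is that DeepSea only targets disks that carry no partition-table signatures. A quick way to confirm a disk really is blank is to look at the two signatures the wipe has to clear: the MBR boot signature (0x55AA at byte offset 510) and the "EFI PART" GPT magic at the start of LBA 1. The sketch below is not DeepSea's actual check; scratch.img is an illustrative image file standing in for a block device with 512-byte sectors.]

```shell
DISK=scratch.img
dd if=/dev/zero of="$DISK" bs=512 count=4 2>/dev/null   # blank 2 KiB image (illustrative)

has_signatures() {
  # MBR boot signature 0x55AA at byte offset 510?
  mbr=$(dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
  # "EFI PART" magic at the start of LBA 1 (byte 512 on 512-byte-sector disks)?
  gpt=$(dd if="$1" bs=512 skip=1 count=1 2>/dev/null | head -c 8 | tr -d '\0')
  [ "$mbr" = "55aa" ] || [ "$gpt" = "EFI PART" ]
}

has_signatures "$DISK" && echo "signatures present" || echo "blank"   # blank image
printf '\125\252' | dd of="$DISK" bs=1 seek=510 conv=notrunc 2>/dev/null  # plant an MBR signature
has_signatures "$DISK" && echo "signatures present" || echo "blank"
```

On a 4k-logical-sector drive the GPT header sits at byte 4096 rather than 512, so the `bs=512 skip=1` read would need adjusting — consistent with Allen's finding that the block size mattered for his disks.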