<p dir="ltr">Also 1 more thing - check if ceph-osd@X.service  has failed  any reasons behind that.</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="quote">On Jul 6, 2019 14:24, Allen Sellars <asellars@vigilantnow.com> wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">



<div>
gdisk was reporting no MBR and no GPT partitions, so I assumed they were safe to use.
<div><br />
</div>
<div>I’ll go through zeroing them out with this process and report back.</div>
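<div><br />
</div>
<div>As an extra check (not part of the process above), wipefs with no options and lsblk -f should only list any leftover signatures without touching the disks, e.g.:</div>
<div>wipefs /dev/sdX<br />
lsblk -f /dev/sdX</div>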
<div><br />
</div>
<div>Thanks<br />
<br />
<div dir="ltr">
<div>Allen Sellars</div>
<div><a href="mailto:asellars@vigilantnow.com">asellars@vigilantnow.com</a></div>
<div><br />
</div>
-Sent from my iPhone</div>
<div dir="ltr"><br />
On Jul 5, 2019, at 18:04, Strahil <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br />
<br />
</div>
<blockquote>
<div dir="ltr">
<p dir="ltr">Hi Allen,</p>
<p dir="ltr">I think that you need empty disks for deepsea to 'target' them.</p>
<p dir="ltr">Can you wipe the partition's beginning, disk beginning and disk end ?</p>
<p dir="ltr">Should be something like:</p>
<p dir="ltr">for partition in /dev/sdX[0-9]*<br />
do  <br />
dd if=/dev/zero of=$partition bs=4096 count=1 oflag=direct done  </p>
<p dir="ltr">dd if=/dev/zero of=/dev/sdX bs=512 count=34 oflag=direct<br />
</p>
<p dir="ltr">dd if=/dev/zero of=/dev/sdX bs=512 count=33 \ <br />
seek=$((`blockdev --getsz /dev/sdX` - 33)) oflag=direct</p>
<p dir="ltr">And then create a gpt partition table:</p>
<p dir="ltr">sgdisk -Z --clear -g /dev/sdX</p>
<p dir="ltr">Source: <a href="https://www.google.bg/url?sa=t&source=web&rct=j&url=https://www.suse.com/documentation/suse-enterprise-storage-5/pdfdoc/book_storage_deployment/book_storage_deployment.pdf&ved=2ahUKEwj_2ouC4p7jAhWkwqYKHd7OBJUQFjAAegQIARAB&usg=AOvVaw3g9_lOOBwwzqK3siEkNbnF">
</a><a href="https://www.google.bg/url?sa=t&source=web&rct=j&url=https://www.suse.com/documentation/suse-enterprise-storage-5/pdfdoc/book_storage_deployment/book_storage_deployment.pdf&ved=2ahUKEwj_2ouC4p7jAhWkwqYKHd7OBJUQFjAAegQIARAB&usg=AOvVaw3g9_lOOBwwzqK3siEkNbnF">https://www.google.bg/url?sa=t&source=web&rct=j&url=https://www.suse.com/documentation/suse-enterprise-storage-5/pdfdoc/book_storage_deployment/book_storage_deployment.pdf&ved=2ahUKEwj_2ouC4p7jAhWkwqYKHd7OBJUQFjAAegQIARAB&usg=AOvVaw3g9_lOOBwwzqK3siEkNbnF</a></p>
<p dir="ltr">Best Regards,<br />
Strahil Nikolov</p>
<div>On Jul 6, 2019 00:41, Allen Sellars <<a href="mailto:asellars@vigilantnow.com">asellars@vigilantnow.com</a>> wrote:<br />
<blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div>
<p><span style="font-size:11pt">I have a cisco UCS S3260 with 52 6TB spinning disks and 4 SSDs as DB disks.</span></p>
<p><span style="font-size:11pt"> </span></p>
<p><span style="font-size:11pt">I have no profile-* configs in the proposals directory.</span></p>
<p><span style="font-size:11pt"> </span></p>
<p><span style="font-size:11pt">I’ve obscured FQDNs</span></p>
<p><span style="font-size:11pt"> </span></p>
<p><span style="font-size:11pt">Stages 0-2 run fine with no failures. I see the following in stage 3:</span></p>
<p><span style="font-size:11pt">When I run salt-run state.orch ceph.stage.3 my salt-master return this:</span></p>
<p><span style="font-size:11pt"> </span></p>
<p><span style="font-size:11pt">firewall                 : disabled</span></p>
<p><span style="font-size:11pt">apparmor                 : disabled</span></p>
<p><span style="font-size:11pt">subvolume                : skipping</span></p>
<p><span style="font-size:11pt">DEV_ENV                  : True</span></p>
<p><span style="font-size:11pt"></span></p></div></div></blockquote></div></div></blockquote></div></div></blockquote></div>