[Deepsea-users] DeepSea fails to deploy stage 3

Strahil Nikolov hunter86_bg at yahoo.com
Tue Jun 25 09:19:45 MDT 2019


On June 24, 2019 10:23:30 PM GMT+03:00, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>Hello All,
>
>I'm seeking your help, as I can't get past stage 3 of a Ceph deployment.
>
>Here is some info:
>1. I have deployed 3 VMs for a test cluster, running openSUSE Leap 15.1.
>2. Current repos (all enabled, GPG check on, auto-refresh):
>
>    1 | CEPH-EXTRA          | https://download.opensuse.org/repositories/filesystems:/ceph:/mimic/openSUSE_Leap_15.0/
>    2 | DEEPSEA             | https://download.opensuse.org/repositories/filesystems:/ceph:/nautilus/openSUSE_Leap_15.1/
>    8 | repo-non-oss        | http://download.opensuse.org/distribution/leap/15.1/repo/non-oss/
>    9 | repo-oss            | http://download.opensuse.org/distribution/leap/15.1/repo/oss/
>   12 | repo-update         | http://download.opensuse.org/update/leap/15.1/oss
>   13 | repo-update-non-oss | http://download.opensuse.org/update/leap/15.1/non-oss/
>    
>3. Populated /etc/hosts on all nodes.
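>   For reference, the entries on each node look roughly like this (the
>   addresses are placeholders for my lab network):
>     192.168.122.11  node1.localdomain  node1
>     192.168.122.12  node2.localdomain  node2
>     192.168.122.13  node3.localdomain  node3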
>4. Picked node1 as the Salt master and enrolled all nodes as Salt minions.
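>   In short: on node1, "zypper in salt-master" plus "systemctl enable
>   --now salt-master"; then on every node (a minimal sketch, the
>   drop-in file name is my own choice):
>     zypper in salt-minion
>     echo "master: node1.localdomain" > /etc/salt/minion.d/master.conf
>     systemctl enable --now salt-minion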
>5. Contents of /srv/pillar/ceph/master_minion.sls on node1:
>master_minion: node1.localdomain 
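>   (Minion keys are accepted on the master with "salt-key -A"; a quick
>   "salt '*' test.ping" then confirms all three minions respond.)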
>6. Set the DeepSea grain on all nodes: salt 'node*' grains.append deepsea default
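>   The grain can be double-checked afterwards with:
>     salt 'node*' grains.get deepsea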
>7. Deployed stage 0: salt-run state.orch ceph.stage.0
>8. Deployed stage 1: salt-run state.orch ceph.stage.1
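>   Stage 1 (discovery) fills /srv/pillar/ceph/proposals/ with the
>   cluster-, config- and role- fragments that policy.cfg (below)
>   selects from; "ls /srv/pillar/ceph/proposals/" shows what was
>   generated.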
>9. Created a policy.cfg (most probably something is wrong here):
>
>cat /srv/pillar/ceph/proposals/policy.cfg 
>cluster-ceph/cluster/node1.localdomain.sls
>cluster-ceph/cluster/node2.localdomain.sls 
>cluster-ceph/cluster/node3.localdomain.sls
>config/stack/default/ceph/cluster.yml 
>config/stack/default/global.yml 
>role-admin/cluster/node1.localdomain.sls 
>role-admin/cluster/node2.localdomain.sls 
>role-admin/cluster/node3.localdomain.sls 
>role-benchmark-blockdev/cluster/node1.localdomain.sls 
>role-benchmark-blockdev/cluster/node2.localdomain.sls 
>role-benchmark-blockdev/cluster/node3.localdomain.sls 
>role-benchmark-fs/cluster/node1.localdomain.sls 
>role-benchmark-fs/cluster/node2.localdomain.sls 
>role-benchmark-fs/cluster/node3.localdomain.sls 
>role-benchmark-rbd/cluster/node1.localdomain.sls 
>role-benchmark-rbd/cluster/node2.localdomain.sls 
>role-benchmark-rbd/cluster/node3.localdomain.sls 
>role-client-cephfs/cluster/node1.localdomain.sls 
>role-client-cephfs/cluster/node2.localdomain.sls 
>role-client-cephfs/cluster/node3.localdomain.sls 
>role-client-iscsi/cluster/node1.localdomain.sls 
>role-client-iscsi/cluster/node2.localdomain.sls 
>role-client-iscsi/cluster/node3.localdomain.sls 
>role-client-nfs/cluster/node1.localdomain.sls 
>role-client-nfs/cluster/node2.localdomain.sls 
>role-client-nfs/cluster/node3.localdomain.sls 
>role-client-radosgw/cluster/node1.localdomain.sls 
>role-client-radosgw/cluster/node2.localdomain.sls 
>role-client-radosgw/cluster/node3.localdomain.sls 
>role-ganesha/cluster/node1.localdomain.sls 
>role-ganesha/cluster/node2.localdomain.sls 
>role-ganesha/cluster/node3.localdomain.sls 
>role-grafana/cluster/node1.localdomain.sls 
>role-grafana/cluster/node2.localdomain.sls 
>role-grafana/cluster/node3.localdomain.sls 
>role-igw/cluster/node1.localdomain.sls 
>role-igw/cluster/node2.localdomain.sls 
>role-igw/cluster/node3.localdomain.sls 
>role-master/cluster/node1.localdomain.sls 
>role-master/cluster/node2.localdomain.sls 
>role-master/cluster/node3.localdomain.sls 
>role-mds/cluster/node1.localdomain.sls 
>role-mds/cluster/node2.localdomain.sls 
>role-mds/cluster/node3.localdomain.sls 
>role-mgr/cluster/node1.localdomain.sls 
>role-mgr/cluster/node2.localdomain.sls 
>role-mgr/cluster/node3.localdomain.sls 
>role-mon/cluster/node1.localdomain.sls 
>role-mon/cluster/node2.localdomain.sls 
>role-mon/cluster/node3.localdomain.sls 
>role-prometheus/cluster/node1.localdomain.sls 
>role-prometheus/cluster/node2.localdomain.sls 
>role-prometheus/cluster/node3.localdomain.sls 
>role-rgw/cluster/node1.localdomain.sls 
>role-rgw/cluster/node2.localdomain.sls 
>role-rgw/cluster/node3.localdomain.sls 
>role-storage/cluster/node1.localdomain.sls 
>role-storage/cluster/node2.localdomain.sls 
>role-storage/cluster/node3.localdomain.sls
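>
>For comparison, the policy.cfg examples in the DeepSea docs are much
>slimmer; a minimal layout for this three-node lab might look like the
>following (an untested sketch, the role split is my assumption):
>
>cluster-ceph/cluster/*.sls
>config/stack/default/global.yml
>config/stack/default/ceph/cluster.yml
>role-master/cluster/node1.localdomain.sls
>role-admin/cluster/*.sls
>role-mon/cluster/node*.sls
>role-mgr/cluster/node*.sls
>role-storage/cluster/node*.sls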
>
>10. Deployed stage 2: salt-run state.orch ceph.stage.2
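>   After stage 2, the role assignment can be verified with:
>     salt '*' pillar.items
>   (each minion should report its roles in the pillar output)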
>
>11. Tried to deploy stage 3, but it fails:
>
>I noticed that stage 3 claimed the firewall was down, while I was sure
>it was running; DeepSea failed to detect firewalld. I then disabled
>the firewall manually and repeated the stage 3 deployment, but sadly
>the result is the same. So far I have the feeling that either my
>proposal (policy.cfg) is wrong or that three storage nodes are not
>enough.
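>
>A stage run can also be watched from a second terminal via the Salt
>event bus, which sometimes shows the failing step more clearly than
>the orchestration summary:
>    salt-run state.event pretty=True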
>
>My stage 3 debug log is here:
>https://drive.google.com/open?id=1-2K8aI31eQivX4EsRO-WrJOjGO9HfSAE
>
>
>
>Best Regards,
>Strahil Nikolov

I'm resending my email due to subscription issues.
If you already received it, please accept my apologies.

Best Regards,
Strahil Nikolov

