[Deepsea-users] install calamari

Bo Jin bo.jin at suse.com
Tue Jan 10 05:33:59 MST 2017


Problem solved. You are right, I didn't have
/etc/ceph/ceph.client.admin.keyring on the MON nodes.
So I modified the policy.cfg and redeployed stages 2 and 3. Then calamari
started to show content.

Thanks Tim
Bo

On 01/10/2017 03:58 AM, Tim Serong wrote:
> On 01/09/2017 10:51 PM, Bo Jin wrote:
>> Hi,
>> I deployed a ceph cluster using SES4 DeepSea. Now I want to install
>> calamari on the master server. I did that, but when I open the calamari
>> web interface it still asks me to run ceph-deploy against all cluster
>> nodes in order to get information about the cluster and nodes.
>>
>> But I thought, since calamari uses salt and DeepSea has already
>> deployed salt minions to the cluster nodes, shouldn't calamari just
>> pick up the minion information?
>
> Calamari includes some salt state files which set up a scheduled job on
> the ceph nodes to check cluster status.  That's how it knows there's a
> cluster; it's not enough for the minions to exist, and if calamari's
> ceph.heartbeat function isn't running properly, it won't realise the
> cluster is there.  But this should have all been set up automatically.
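>
> If you want to confirm that the scheduled job is actually registered on
> the minions (assuming calamari wires it up through salt's scheduler),
> a quick check would be something like:
>
>   salt '*' schedule.list
>
> A minion with the heartbeat set up should list a ceph.heartbeat entry
> there.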
>
> Here's some things to check:
>
> 1) Make sure the salt state files included in the calamari-server
> package haven't been edited or mangled by something else.  Run `rpm -q
> --verify calamari-server` - if this gives no output, you're good.  If it
> shows any of the files in /srv/salt or /srv/reactor as having been
> changed somehow, that might be the source of the problem.
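>
> For reference, a changed file shows up in the verify output as a row of
> attribute flags followed by the path, something like (hypothetical
> path):
>
>   S.5....T.  /srv/salt/some-calamari-state.sls
>
> where the 5 means the file's checksum no longer matches what the
> package shipped.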
>
> 2) Run `salt '*' state.highstate` on the master and see if that fixes it
> (maybe the salt state included with calamari-server simply wasn't
> applied yet somehow?)
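>
> If the highstate output is too long to read comfortably, the standard
> `--state-output=terse` option makes any failed states easier to spot,
> e.g.:
>
>   salt '*' state.highstate --state-output=terse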
>
> 3) Make sure the ceph client admin keyring
> (/etc/ceph/ceph.client.admin.keyring) is installed on all the MON
> nodes.  Calamari's ceph.heartbeat function won't give proper cluster
> state if this is not present.
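>
> One way to check that from the master (file.file_exists is a standard
> salt execution module function) would be something like:
>
>   salt '*' file.file_exists /etc/ceph/ceph.client.admin.keyring
>
> which should return True on every MON node.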
>
> 4) Try `salt '*' ceph.get_heartbeats --output json` on the master and
> see if you get a blob of cluster status information.
>
>> Next question: I installed openATTIC on a separate node which is not
>> the salt master. It works so far except for the "node" tab in the
>> openATTIC UI. Someone told me that for the node view, openATTIC must be
>> running on the salt master. So how should openATTIC and calamari
>> co-exist in such an environment? Or what is the best practice for
>> running calamari in parallel with openATTIC?
>
> You could give salt-master a second IP address, and tweak
> /etc/apache2/conf.d/calamari.conf so that calamari is only available on
> that IP address.  For example, on one of my test systems:
>
> - eth0 is 192.168.12.225
> - eth0:0 is 192.168.12.226
> - I edited /etc/apache2/conf.d/calamari.conf and changed
>   <VirtualHost *:80> to <VirtualHost 192.168.12.226:80>
> - Restart apache, and openATTIC is accessible on http://192.168.12.225/,
>   calamari on http://192.168.12.226/
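>
> Roughly, the edited stanza in calamari.conf would look like this, with
> everything else in the file left as shipped (the address is just my
> test alias, use whatever you assign):
>
>   <VirtualHost 192.168.12.226:80>
>       # ... rest of the calamari configuration unchanged ...
>   </VirtualHost>
>
> For a quick test you can add the alias non-persistently with
> `ip addr add 192.168.12.226/24 dev eth0 label eth0:0`; to make it
> survive a reboot, set it up the usual way via YaST or the interface's
> ifcfg file.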
>
> Regards,
>
> Tim
>

-- 
Bo Jin
Sales Engineer
SUSE Linux
Mobile: +41792586688
bo.jin at suse.com
www.suse.com

