[caasp-beta] kubeconfig download error with DEX internal server error , Login error

Ns, Rushi rushi.ns at sap.com
Tue Nov 21 08:59:42 MST 2017


Hi Rob,

Thanks for the detailed information, and for filing the bug. First of all, my apologies for reaching out to you directly; since the issue has been dragging on for many months (first we had the caasp-cli authorization issue and now the kubeconfig download issue with Dex), I thought I could get answers or solutions directly from SUSE people rather than from beta users outside SUSE. That is the main reason I contacted you.

With your information in hand, I will file bugs going forward and go through the beta list only.

Thank you for your suggestion and for filing the bug on my behalf. I have an account in your bug system and tried to open the bug you filed (https://bugzilla.suse.com/show_bug.cgi?id=1069175), but I get the error “You are not authorized to access bug #1069175”. Any idea why?






Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com>
Date: Tuesday, November 21, 2017 at 6:08 AM
To: Rushi NS <rushi.ns at sap.com>, Vincent Moutoussamy <vmoutoussamy at suse.com>
Cc: Simon Briggs <Simon.Briggs at suse.com>, Vincent Untz <VUntz at suse.com>, "caasp-beta at lists.suse.com" <caasp-beta at lists.suse.com>
Subject: Re: kubeconfig download error with DEX internal server error , Login error

Rushi,

You emailed the list about this issue; yes, I can see that.

However, investigating these things takes time. The engineering team needs time to look into it.

Please be patient. Vincent is unable to help you with any technical issues; he is our beta program manager for all SUSE beta programs and will just forward the email to the list again.

I can see Martin emailed back this morning with some potential steps to follow that may help. I have attached them here for your convenience. Please attempt them and report back to the caasp-beta at lists.suse.com email.

He also logged bug ID 1069175 for you with this issue.
I have asked you on numerous occasions before to log a bug report, and this is now there.
If you have not done so already, please create a Bugzilla account with your rushi.ns at sap.com email so I can add you as a CC to the bug (which will get you updates whenever anyone else adds comments to it).
If you have already logged a bug and I simply cannot find it, then great; please email caasp-beta at lists.suse.com with the Bugzilla ID number and someone will take a look for you.

As I have suggested to you directly before, Martin is asking you to check that the value entered in the External FQDN field in Velum is the correct one for your cluster. I asked you to do the same the next time you built a cluster, but I never heard back, and I think you emailed someone else on the mailing list directly.

We ask for the bug reports because they go straight to engineering. Emailing myself, Vincent or Simon about the issue without including caasp-beta at lists.suse.com will not get you any further, as we all end up with different versions of the story and no diagnostic history.

If the process is not followed correctly then we end up in the situation we are in now, where various people are getting the same emails from you without the information requested and with no bug report logged.

Now that the bug has been logged it will be investigated, but unless you create an account on the SUSE Bugzilla you will not be able to see it. Once you have created an account, please let the caasp-beta at lists.suse.com list know; we can then add you to the bug Martin logged on your behalf and you can continue diagnostics there.

Please do not email myself, Simon or Vincent directly again about this issue, and do not remove the caasp-beta at lists.suse.com list from the CC, as this makes the email thread very hard to follow and will make the whole process take longer. Emailing random SUSE employees about an issue with no history and none of the requested diagnostic information only slows things down in the long run and makes it harder for our engineers to help you.

Now that we have a bug logged for you, someone will soon email you and caasp-beta at lists.suse.com with something to try or a request for diagnostic info. Please do provide it and leave caasp-beta at lists.suse.com on CC, as this gives your email the widest possible audience and the best chance that someone can help.

Thank you for your patience,
Rob
-----
Rob de Canha-Knight
EMEA Platform and Management Technical Strategist
SUSE
rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>
(P)  +44 (0) 1635 937689
(M) +44 (0) 7392 087303
(TW) rssfed23<https://twitter.com/rssfed23>
----

From: "Ns, Rushi" <rushi.ns at sap.com>
Date: Monday, 20 November 2017 at 23:20
To: Rob de Canha-Knight <rob.decanha-knight at suse.com>, Vincent Moutoussamy <vmoutoussamy at suse.com>
Cc: Simon Briggs <Simon.Briggs at suse.com>, Vincent Untz <VUntz at suse.com>
Subject: kubeconfig download error with DEX internal server error , Login error

Hi Rob,

I did try to reach the beta list email but I wasn't getting any response. Now I am stuck with this Dex error.

Can someone from your team help? We are getting a lot of requests to build with SUSE CaaSP, as you have already certified it with SAP Vora, and this error has become a show stopper for me.

https://www.suse.com/communities/blog/sap-vora-2-0-released-suse-caasp-1-0/

@Vincent Moutoussamy: can you help?

Here is the problem
=======================
I built the cluster with the latest SUSE CaaSP 2.0 and am getting errors with Dex authentication when downloading the kubeconfig file from the Velum web interface.

Has anyone experienced this error? I did multiple setups (multi-master and single master), but both clusters show the same error.

My initial thought was that the error was related to the multi-master setup (I set up multi-master first); however, even with a single master I got the same error. I am not sure whether this is a bug, but I cannot download the kubeconfig file from Velum.

I got this error:

--------
“internal server error, Login error”
--------


My login to Velum works fine with the same credentials; however, when downloading the kubeconfig file the authentication fails.

Let me know if anyone has experienced the same.


Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com>
Date: Tuesday, November 14, 2017 at 2:56 PM
To: Rushi NS <rushi.ns at sap.com>
Cc: Simon Briggs <Simon.Briggs at suse.com>, Vincent Untz <VUntz at suse.com>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Rushi,

As advised in the previous mail I’m unable to provide any additional support to you on this matter and I have to direct you to obtain support through the usual channels for any additional queries.

So please reach out to the caasp-beta mailing list, or use Bugzilla to log a bug for investigation if you think the process is being followed correctly. We have not seen this issue internally during 2.0 testing in any of our environments or other beta users' environments, so we would appreciate the bug report so it can be investigated and fixed by our engineering team if it is indeed a problem with the product.

Please note, though, that due to the HackWeek we run at SUSE this week, you may experience a slightly delayed response both on the caasp-beta mailing list and for anything put through Bugzilla, as our product and engineering teams are in effect off this week.

Rob

-----
Rob de Canha-Knight
EMEA Platform and Management Technical Strategist
SUSE
rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>
(P)  +44 (0) 1635 937689
(M) +44 (0) 7392 087303
(TW) rssfed23<https://twitter.com/rssfed23>
----

From: "Ns, Rushi" <rushi.ns at sap.com>
Date: Tuesday, 14 November 2017 at 18:05
To: Rob de Canha-Knight <rob.decanha-knight at suse.com>
Cc: Simon Briggs <Simon.Briggs at suse.com>, Vincent Untz <VUntz at suse.com>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Hi Rob,

Did you get a chance to check my mail, and is there any solution to this problem? Do you think this is a bug in the release? Like I said, I have tried multi-master as well as single master, and in both iterations the result is the same error.

Do you think there could be any proxy issues? As you know, the systems are behind a proxy and I used the proxy parameters during the setup.

Here is a screenshot of my proxy settings. Let me know if there is any way to fix this; I can share my screen if you have a few minutes. This is really hurting my team, as I need to set up a SUSE-based Kubernetes cluster, which I was trying to do with kubeadm. I am still hoping CaaSP will overcome the issues of the kubeadm alternative, but it is not going as per my expectations.



[screenshot: proxy settings]



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rushi NS <rushi.ns at sap.com>
Date: Friday, November 10, 2017 at 3:09 PM
To: Rob de Canha-Knight <rob.decanha-knight at suse.com>
Cc: Simon Briggs <Simon.Briggs at suse.com>, Vincent Untz <VUntz at suse.com>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Hi Rob,

I have tried using the dashboard host as the admin node (the Velum host), as you mentioned; after doing everything I got the same error. I think this could be a problem with multi-master.

I did another test with single master and it has the same error.

I am not sure where this error comes from, but I did everything correctly based on your suggestion.


[screenshot: error]



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rushi NS <rushi.ns at sap.com>
Date: Friday, November 10, 2017 at 11:59 AM
To: Rob de Canha-Knight <rob.decanha-knight at suse.com>
Cc: Simon Briggs <Simon.Briggs at suse.com>, Vincent Untz <VUntz at suse.com>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Hi Rob,

OK, got it. Because of multi-master I do require round robin, either via the admin node or something with a load balancer.

Let me try this fix by rebuilding with multi-master, and if that fails then I will try with a single master.

Keep you posted.

Have a nice weekend .

Best Regards,

Rushi.
Success is not a matter of being the best & winning the race. Success is a matter of handling the worst & finishing the race

Sent from my iPhone
please excuse typos and brevity

On Nov 10, 2017, at 11:30, Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>> wrote:
The k8s external FQDN must be a load balancer set up externally from the cluster if you are doing multi-master.

The external dashboard FQDN must be the FQDN that Velum is running on, i.e. the admin node. If your admin node is lvsusekub1 then put that in there.

Doing multi-master on bare metal requires a load balancer, and it is that load balancer's address that goes in the top box. If you don't have a load balancer then you can put in any of the master node FQDNs and it will work. So put lvsusekub3 in the top box and lvsusekub1 in the bottom box, and you can do round-robin DNS on your DNS server (rough sketch below).
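
For reference, a rough sketch of what round-robin DNS for the masters could look like (the record name and IPs are placeholders, not values from this thread):

  kube-api.pal.sap.corp.  IN A  <ip-of-master-1>
  kube-api.pal.sap.corp.  IN A  <ip-of-master-2>
  kube-api.pal.sap.corp.  IN A  <ip-of-master-3>

Use that name as the external k8s API FQDN, and verify it resolves to all masters with, for example, `dig +short kube-api.pal.sap.corp`.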

It’s worth noting that once you enter those values they are fixed and to change them you have to rebuild the cluster from scratch.

If this is a development environment I recommend using a single master node and putting that value in the top box and the admin node fqdn in the bottom box. Start simple and build up from there.

I’m signing off now for the weekend. Have a good weekend.


-----
Rob de Canha-Knight
EMEA Platform and Management Technical Strategist
SUSE
rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>
(P)  +44 (0) 1635 937689
(M) +44 (0) 7392 087303
(TW) rssfed23<https://twitter.com/rssfed23>
----

From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: Friday, 10 November 2017 at 19:24
To: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Cc: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>, Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

OK, I agree to some point (two boxes only; I put the same hostname in both boxes, “lvsusekub3” and lvsusekube3.pal.sap.corp). I set up 3 masters, as I mentioned before, and this host LVSUSEKUB3 is the first master node's hostname. I made sure everything was right except the FQDN.

<image008.png>

<image009.png>


Question: what hostname should I put in the second box?
My admin node: lvsusekub1




Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Date: Friday, November 10, 2017 at 11:19 AM
To: Rushi NS <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>, Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

I've identified your problem.

The first box is the k8s API endpoint.

This field has to be set to the Kubernetes master FQDN. I think you have it set to your admin node FQDN, and that is why things are not working.

You will have to destroy your cluster and make sure that the top field in your screenshot has the FQDN of the k8s master node, not the admin node (those two boxes must have different addresses in them), roughly as sketched below.
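
For illustration only (the field labels are paraphrased; the hostnames are the ones mentioned in this thread):

  Kubernetes API FQDN (top box)  : lvsusekub3   (a master node, or a load balancer in front of the masters)
  Dashboard FQDN (bottom box)    : lvsusekub1   (the admin node running Velum)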

-----
Rob de Canha-Knight
EMEA Platform and Management Technical Strategist
SUSE
rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>
(P)  +44 (0) 1635 937689
(M) +44 (0) 7392 087303
(TW) rssfed23<https://twitter.com/rssfed23>
----

From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: Friday, 10 November 2017 at 19:17
To: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Cc: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>, Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Hi Rob,

Answer to your queries.

You must make sure that you are accessing velum from the right FQDN – the one you gave velum during the setup process when it asks for the internal and external dashboard FQDN.
I set this during the API FQDN step.



<image017.png>

I made sure no plugins are blocking anything (JavaScript).

Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Date: Friday, November 10, 2017 at 11:13 AM
To: Rushi NS <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>, Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

You must make sure that you are accessing velum from the right FQDN – the one you gave velum during the setup process when it asks for the internal and external dashboard FQDN.

Aside from that make sure you’ve not got any browser plugins that are blocking scripts or javascript from running.

If you still cannot get it to work then you will have to wait for the 2.0 final release next week and try that. If you run into issues there I cannot help as it doesn’t fall into my role and you’ll have to use the official channels for support.

-----
Rob de Canha-Knight
EMEA Platform and Management Technical Strategist
SUSE
rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>
(P)  +44 (0) 1635 937689
(M) +44 (0) 7392 087303
(TW) rssfed23<https://twitter.com/rssfed23>
----

From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: Friday, 10 November 2017 at 19:09
To: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Cc: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Thanks. I did the setup with 3 masters and 1 minion and it worked nicely, but when downloading the kubeconfig file the credentials I set during the Velum setup are not accepted and I get an error downloading the file:

<image025.png>


<image026.png>



Also, I got the error you mentioned (not being able to talk to the Velum API; when this happens, refresh the browser page and accept the new certificate). I refreshed, but I was never offered a new certificate to accept; still, everything worked.



<image027.png>







Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Date: Friday, November 10, 2017 at 10:32 AM
To: Rushi NS <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

It supports multi-master and yes, your previous mail is correct.
Sent from my iPhone - please excuse any shortness

On 10 Nov 2017, at 18:29, Ns, Rushi <rushi.ns at sap.com<mailto:rushi.ns at sap.com>> wrote:
Hi Rob,

Does this release support multi-master (controllers/etcd), or only a single master?



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rushi NS <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: Friday, November 10, 2017 at 10:17 AM
To: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Cc: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Hi Rob,

Perfect, and thanks. I just downloaded it, will start deploying, and will keep you posted.

As I understand it, 2.0 removed the caasp-cli authentication, and everything should work as it did before with 1.0, using the kubeconfig file downloaded from the Velum web interface?




Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Date: Friday, November 10, 2017 at 10:01 AM
To: Rushi NS <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

November 16th

However, you can download our latest release candidate ISO from https://drive.google.com/file/d/1ZO0sduyV5GS3WThl0eLVjnMNHCaFIi5u/view?usp=sharing which doesn't require you to use caasp-cli.

One note: during the bootstrap process you will get an error at the top about not being able to talk to the Velum API. When this happens, please refresh your browser page and accept the new certificate.

Once you have done this it will be able to talk to the API and you're good to go. To obtain the kubeconfig file, click the button; this will redirect you to a new login page where you enter your CaaS Platform admin account credentials, and it will offer your browser a download of the kubeconfig that has the correct client certificate in it.
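
Once you have the file, a minimal sketch of using it (standard kubectl usage; the download path is just an assumption):

  # point kubectl at the downloaded kubeconfig and check the cluster
  export KUBECONFIG=~/Downloads/kubeconfig
  kubectl get nodes
  kubectl get pods --all-namespaces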

Many thanks,
Rob

-----
Rob de Canha-Knight
EMEA Platform and Management Technical Strategist
SUSE
rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>
(P)  +44 (0) 1635 937689
(M) +44 (0) 7392 087303
(TW) rssfed23<https://twitter.com/rssfed23>
----

From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: Friday, 10 November 2017 at 17:58
To: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Cc: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

HI Rob,

What is the ETA for the 2.0 release?



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Rob de Canha-Knight <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>>
Date: Tuesday, November 7, 2017 at 2:32 PM
To: Rushi NS <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Carsten Duch <carsten.duch at suse.com<mailto:carsten.duch at suse.com>>, Johannes Grassler <Johannes.Grassler at suse.com<mailto:Johannes.Grassler at suse.com>>, Michal Jura <MJura at suse.com<mailto:MJura at suse.com>>, Nicolas Bock <nicolas.bock at suse.com<mailto:nicolas.bock at suse.com>>, Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>, Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: KUBEADM method install kubernetes clusters on SUSE 12 SP1/SP2

Thanks Rushi - yes sticking with CaaSP will make your life much easier and enable you to get support as well once a suitable support contract/agreement is in place.

When 2.0 is released we will have an updated user manual and deployment guide in the usual place (https://www.suse.com/documentation/suse-caasp/index.html) for you to consume, so don't worry, you won't get into any trouble :)

Rob
Sent from my iPhone - please excuse any shortness

On 7 Nov 2017, at 23:27, Ns, Rushi <rushi.ns at sap.com<mailto:rushi.ns at sap.com>> wrote:
Hi Rob,

Thank you. Yes, I am sticking to CaaSP only; since we had issues with authorization, I wanted to try kubeadm to set up a cluster for our internet-facing DMZ for federation. kubeadm is working, but it is a pain, whereas CaaSP works nicely with everything based on PXE, which is what I would like to have in my future builds.

If you say 2.0 is coming out next week, then I will wait. Please provide the documentation on how to consume 2.0, so that I don't run into any trouble.

Thank you so much for your quick reply.



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


On 11/7/17, 2:22 PM, "Rob de Canha-Knight" <rob.decanha-knight at suse.com<mailto:rob.decanha-knight at suse.com>> wrote:

   Hi Rushi.

    As mentioned on the thread I just sent you, the method Simon is referring to there is the manual upstream way to deploy Kubernetes.

    It is separate and very different from CaaSP and is completely unsupported in every way.

    As such, we cannot help you here with the kubeadm way in any way, shape or form.

    Please stick with CaaSP for now if you can or if you want assistance from us. The version that doesn't require you to use caasp-cli will be released by the end of next week (2.0 final); you will be able to deploy that successfully, and if you run into any issues we can help you.

   As a side note I kindly request that you use the CaaSP-beta mailing list for your queries as you did in the past or log a support ticket when you run into issues with the final release.
   You are likely to get a better response faster than emailing our product team directly plus the knowledge will be archived publicly for everyone else to benefit.

   Many thanks,
   Rob

   Sent from my iPhone - please excuse any shortness

















On 7 Nov 2017, at 23:13, Ns, Rushi <rushi.ns at sap.com<mailto:rushi.ns at sap.com>> wrote:

Hello Simon,

How are you? It has been a long time.

I have a question; not sure if you can answer it. As you know, we are testing CaaSP from SUSE; however, it is a bit of a pain, as the caasp-cli authentication keeps leaving the cluster without access. Rob is aware of what I am talking about.


Since CaaSP still has the issue with caasp-cli, I was wondering whether SLES 12 SP1 can work with the kubeadm method to install a cluster. Has anyone on your side tried it? I found this link but am not sure:
https://forums.suse.com/archive/index.php/t-9637.html. Do you know who “simon (smflood)” is, is that you :( ? In the above link he said he installed with kubeadm using SLES 12 SP1 and SP2, and he gave links to the images at

https://software.opensuse.org/download.html?project=Virtualization%3Acontainers&package=kubernetes


Can someone help me with the kubeadm method to install a Kubernetes cluster on SLES 12 SP1/SP2?
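
(For orientation, a very rough sketch of the generic kubeadm flow, assuming the kubeadm/kubelet binaries can be installed from the OBS project linked above; package availability on SLES 12 is not verified here:)

  # on the intended master node
  kubeadm init
  # on each worker node, using the token printed by "kubeadm init"
  kubeadm join --token <token> <master-ip>:6443
  # verify from the master
  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes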


Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


On 3/14/17, 2:26 AM, "Simon Briggs" <simon.briggs at suse.com<mailto:simon.briggs at suse.com>> wrote:

  Hi Rushi,

  I am part of the team delivering our Expert Day in Rome today so cannot
  make a call, but I want to make sure things are progressing for you.

  Please advise if Michal's advice worked or if you have new challenges we can help with.

  Thanks

  Simon Briggs


  On 10/03/17 09:10, Simon Briggs wrote:
Hi Rushi,

AJ has answered the CaaSP question.

But I can help explain that SOC7 is now fully GA and can be downloaded freely via the https://www.suse.com/download-linux/ Cloud click-through.

Thanks

Simon


On 09/03/17 21:54, Ns, Rushi wrote:
Hi Michal,

Any update on this? I am eagerly waiting for the change, as I will start the setup again when SOC7 GA comes out.

@Vincent: Do you know when SOC7 GA comes out? Also the CaaS beta?

Best Regards,
Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE

On 2/23/17, 7:14 AM, "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>> wrote:

   Hi Michal,
        Good to hear that it's doable. Yes, please test at your end and let me know; I will wait for your confirmation and the procedure for how to consume our designated SDN VLAN.
Best Regards,

Rushi.
   Success is not a matter of being the best & winning the race.
Success is a matter of handling the worst & finishing the race
        Sent from my iPhone
   please excuse typos and brevity
On Feb 23, 2017, at 03:04, Michal Jura <mjura at suse.com<mailto:mjura at suse.com>>
wrote:

Hi Rushi,

It should be possible to use VLAN ID 852 for the Magnum private network. You should configure a network named "private" in advance with VLAN ID 852, but I have to test it first.

Changing the subnet to 192.168.x.x should be doable too, but I have to check it.

Please give me some time and I will come back to you.

Best regards,
Michal

On 02/22/2017 11:01 PM, Ns, Rushi wrote:
Hi Carsten,

Thank you. As you know, we have VLAN ID *852* as the SDN in network.json, which is already configured at our switch level. Here I have a question or suggestion: can I use this VLAN 852 for the Magnum side as L2 traffic? We do not want to use 10.x.x.x IP space, so we use a non-routable 192.168.x.x kind of IP space which will route through our 852 VLAN.

Is it possible to define this in the Heat template, so that cluster deployment will generate a 192.168.x.x subnet instead of a 10.x.x.x subnet when a Kubernetes cluster is created?

Best Regards,

Rushi.

I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A
DIFFERENCE

From: Carsten Duch <carsten.duch at suse.com>
Date: Wednesday, February 22, 2017 at 10:21 AM
To: "Ns, Rushi" <rushi.ns at sap.com>, Johannes Grassler <Johannes.Grassler at suse.com>, Michal Jura <MJura at suse.com>, Vincent Untz <VUntz at suse.com>
Cc: Nicolas Bock <nicolas.bock at suse.com>, Simon Briggs <Simon.Briggs at suse.com>
Subject: AW: Weekly review of SAP Big Data SOC 7 testing

Hi Rushi,

The problem is that you have configured it to use the VLANs from 222 to 2222. You have to choose a range which is allowed on the trunk port and not already in use.

If you want to change the starting point you have to redeploy the whole cloud and provide the correct VLAN ID when editing the network.json.

So without that, you are only able to change the max number up to a value you are able to use, maybe 50 for 222 to 272, or try VXLAN instead of VLAN again. But I think that the overall problem is a misconfigured switch. Make sure that all VLAN IDs are allowed for the trunk and you will have a good chance that it works.

Sent from my Samsung Galaxy smartphone.

-------- Original message --------

From: "Ns, Rushi" <rushi.ns at sap.com>

Date: 22.02.17 19:04 (GMT+01:00)

To: Carsten Duch <carsten.duch at suse.com>, Johannes Grassler <Johannes.Grassler at suse.com>, Michal Jura <MJura at suse.com>, Vincent Untz <VUntz at suse.com>

Cc: Nicolas Bock <nicolas.bock at suse.com>, Simon Briggs <Simon.Briggs at suse.com>

Subject: Re: Weekly review of SAP Big Data SOC 7 testing

Hi Carsten,

Yes, I am aware, as we discussed this during our call and after reading your response; however, the VLAN range 222-322 is already used in our production. In particular, 271 is our laptop VLAN (all employees' laptops get IP addresses from it), which we cannot use for this. I am looking for alternatives. Let me know if you have any idea other than allowing 222-322.




Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A
DIFFERENCE


On 2/21/17, 10:38 PM, "Carsten Duch" <carsten.duch at suse.com<mailto:carsten.duch at suse.com>>
wrote:

  Hi Rushi,
  have you tried to configure your switch according to my email from the 14th?

  Maybe you didn't get the mail? I suggested the following configuration on the switch:

  You are using linuxbridge with VLAN.

  Make sure to allow tagging of VLANs on the switch and add the range to the allowed VLANs for the TRUNK.

  The range is defined by your fixed VLAN and the maximum number of VLANs.
  Starting point:
  fixed VLAN id = 222
  + Maximum Number of VLANs configured in the Neutron barclamp = 2000

  That means that you have to allow a range from 222 to 2222 on your switch side. But I would recommend reducing the maximum so that it will not overlap with other existing VLANs.
  You can reduce it to 100 or something lower and then allow a range from 222 to 322 for the TRUNK port.
  You don't need to create all the VLANs manually, but you need to allow VLAN tagging for the port and allow a range.

  Depending on your switch, the configuration should look something like:
  switchport trunk allow vlan 222-322


http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide/AccessTrunk.html

  Make sure to allow all the VLANs from your network.json for the TRUNK port.



  On 21.02.2017 23:40,  Ns, Rushi   wrote:
Hi Michal,

Yes, that's obviously the root cause; I found it before your email, but it is cumbersome to understand the flow of the segmentation ID, which I need to discuss so we can work out how to overcome it.

What I observe is that every time I create a new cluster, the private network generates a new segment ID: 271, 272, 273 and so on (this is like a VLAN). Our floating VLAN can only reach it when we add this segment ID (the dummy ID 231, 232 or whatever gets generated) at our switch level as a real VLAN; otherwise the private network subnet cannot reach the floating IP. The attached picture contains the segmentation ID information. I remember I had a session with one of your SUSE people (carsten.duch at suse.com) recently; I shared my screen and we discussed this network segment issue (software-defined networking), and he answered some of it, however it appeared to be beyond his knowledge. I have CC'd Carsten here, so you can talk to him.

Do you have any idea what needs to be done at the physical network switch level, where the VLANs are already connected but not this VLAN (271, 272, whatever)? It is really not easy to allow this in the real network switch configuration, since the VLAN for this trunked port doesn't exist at all. We had the same issue before when deploying Cloud Foundry on top of OpenStack; we fooled the switch with the private segment ID created, and in the end we found this was a bug on the OpenStack SDN side.

Let me know what needs to be done and I can do that.


Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO
MAKES A
DIFFERENCE


On 2/21/17, 4:27 AM, "Michal Jura" <mjura at suse.com<mailto:mjura at suse.com>> wrote:

  Hi,

  This problem looks like there is no connection from the private network, where the kube-master and kube-minions are launched, to the Heat PublicURL endpoint.

  Please fix network configuration.



  On 02/20/2017 08:10 PM, Johannes Grassler wrote:
Hello Rushi,

alright, so we are creating a cluster now but the
Kubernetes master
fails to signal success to the Heat API (that's what
WaitConditionTimeout means). Unfortunately this
is where
debugging
becomes fairly hard...can you ssh to the cluster's
Kubernetes master and get me
/var/log/cloud-init.log and
/var/log/cloud-init-output.log please? Maybe we
are lucky and
find the cause of the problem in these logs. If
there's
nothing useful
in there I'll probably have to come up with some
debugging instrumentation next...

Cheers,

Johannes

On 02/20/2017 07:53 PM, Ns, Rushi wrote:
Hi Johannes,

Thanks, I just tried with the changes you mentioned and I see that it made some progress this time (creating the private network subnet, Heat stack and instance, as well as the cluster); however, after some time it failed with “CREATE_FAILED” status.

Here is the log in case you want to dig in more.

================
2017-02-20 09:01:18.148 92552 INFO oslo.messaging._drivers.impl_rabbit [-] [6c39b368-bbdf-40cd-b1a5-b14da062f692] Reconnected to AMQP server on 10.48.220.40:5672 via [amqp] clientwith port 36265.
2017-02-20 10:36:25.914 92552 INFO magnum.conductor.handlers.cluster_conductor [req-dddc4477-407f-4b84-afbd-f8b657fd02c6 admin openstack - - -] The stack None was not found during cluster deletion.
2017-02-20 10:36:26.515 92552 WARNING magnum.common.cert_manager.local_cert_manager [req-dddc4477-407f-4b84-afbd-f8b657fd02c6 admin openstack - - -] Deleting certificate e426103d-0ecf-4044-9383-63305c667ac2 from the local filesystem. CertManager type 'local' should be used for testing purpose.
2017-02-20 10:36:26.517 92552 WARNING magnum.common.cert_manager.local_cert_manager [req-dddc4477-407f-4b84-afbd-f8b657fd02c6 admin openstack - - -] Deleting certificate a9a20d33-7b54-4393-8385-85c4900a0f79 from the local filesystem. CertManager type 'local' should be used for testing purpose.
2017-02-20 10:37:39.905 92552 WARNING magnum.common.cert_manager.local_cert_manager [req-d50c84af-7eca-4f76-8e2b-dc49933d0376 admin openstack - - -] Storing certificate data on the local filesystem. CertManager type 'local' should be used for testing purpose.
2017-02-20 10:37:40.049 92552 WARNING magnum.common.cert_manager.local_cert_manager [req-d50c84af-7eca-4f76-8e2b-dc49933d0376 admin openstack - - -] Storing certificate data on the local filesystem. CertManager type 'local' should be used for testing purpose.
2017-02-20 10:48:48.172 92552 ERROR magnum.conductor.handlers.cluster_conductor [req-ac20eb45-8ba9-4b73-a771-326122e94ad7 522958fb-fd7c-4c33-84d2-1ae9e60c1574 - - - -] Cluster error, stack status: CREATE_FAILED, stack_id: e47d528d-f0e7-4a40-a0d3-12501cf5a984, reason: Resource CREATE failed: WaitConditionTimeout: resources.kube_masters.resources[0].resources.master_wait_condition: 0 of 1 received
2017-02-20 10:48:48.510 92552 INFO magnum.service.periodic [req-ac20eb45-8ba9-4b73-a771-326122e94ad7 522958fb-fd7c-4c33-84d2-1ae9e60c1574 - - - -] Sync up cluster with id 15 from CREATE_IN_PROGRESS to CREATE_FAILED.


Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE
PERSON WHO
MAKES A DIFFERENCE


On 2/20/17, 9:51 AM, "Johannes Grassler"
<johannes.grassler at suse.com<mailto:johannes.grassler at suse.com>>
wrote:

  Hello Rushi,

  I took a closer look at the SUSE driver and `--discovery-url none` will definitely take care of any etcd problems.
  The thing I'm not quite so sure about is the registry bit. Can you please try the following...

  magnum cluster-template-create --name k8s_template \
                                 --image-id sles-openstack-magnum-kubernetes \
                                 --keypair-id default \
                                 --external-network-id floating \
                                 --dns-nameserver 8.8.8.8 \
                                 --flavor-id m1.magnum \
                                 --master-flavor-id m1.magnum \
                                 --docker-volume-size 5 \
                                 --network-driver flannel \
                                 --coe kubernetes \
                                 --floating-ip-enabled \
                                 --tls-disabled \
                                 --http-proxy http://proxy.pal.sap.corp:8080

  magnum cluster-create --name k8s_cluster \
                        --cluster-template k8s_template \
                        --master-count 1 \
                        --node-count 2 \
                        --discovery-url none

  ...and see if that yields a working cluster for you? It still won't work in a completely disconnected environment, but with the proxy you have in place it should work.

  Some explanation: the --discovery-url none will disable the validation check that causes the GetDiscoveryUrlFailed error, allowing Magnum to instantiate the Heat template making up the cluster. The

     --http-proxy http://proxy.pal.sap.corp:8080

  will then cause the cluster to try and access the Docker registry through the proxy.

  As far as I understand our driver, the

    --registry-enabled
    --labels registry_url=URL

  will require you to set up a local docker registry in a network reachable from the Magnum bay's instances and specify a URL pointing to that docker registry.
  I'd rather not ask you to do that if access through the proxy turns out to work.

  Cheers,

  Johannes

  On 02/20/2017 04:23 PM, Ns, Rushi wrote:
Hi Johannes,
I have also added the https_proxy parameter, thinking it might need both (http and https), but even that failed too. I see in the log that it still expects the etcd discovery URL.

magnum cluster-template-create --name k8s_template \
                               --image-id sles-openstack-magnum-kubernetes \
                               --keypair-id default \
                               --external-network-id floating \
                               --dns-nameserver 8.8.8.8 \
                               --flavor-id m1.magnum \
                               --master-flavor-id m1.magnum \
                               --docker-volume-size 5 \
                               --network-driver flannel \
                               --coe kubernetes \
                               --floating-ip-enabled \
                               --tls-disabled \
                               --http-proxy http://proxy.pal.sap.corp:8080 \
                               --https-proxy http://proxy.pal.sap.corp:8080


magnum-conductor.log
=====================

2017-02-20 07:17:50.390 92552 ERROR oslo_messaging.rpc.server     discovery_endpoint=discovery_endpoint)
2017-02-20 07:17:50.390 92552 ERROR oslo_messaging.rpc.server GetDiscoveryUrlFailed: Failed to get discovery url from 'https://discovery.etcd.io/new?size=1'.
2017-02-20 07:17:50.390 92552 ERROR oslo_messaging.rpc.server



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE
PERSON
WHO MAKES A
DIFFERENCE


On 2/20/17, 7:16 AM, "Ns, Rushi"
<rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
wrote:

  Hello Johannes,

  No luck even after adding the internet proxy at the time of cluster template creation and without specifying anything at cluster-create. The cluster create failed, and this time I don't see anything created: no Heat stack, no private Kubernetes network subnet, and so on.

  Here are the commands I tried. Let me know if this is how they are supposed to be used, or am I doing something wrong?


  magnum cluster-template-create --name k8s_template \
                                 --image-id sles-openstack-magnum-kubernetes \
                                 --keypair-id default \
                                 --external-network-id floating \
                                 --dns-nameserver 8.8.8.8 \
                                 --flavor-id m1.magnum \
                                 --master-flavor-id m1.magnum \
                                 --docker-volume-size 5 \
                                 --network-driver flannel \
                                 --coe kubernetes \
                                 --floating-ip-enabled \
                                 --tls-disabled \
                                 --http-proxy http://proxy.pal.sap.corp:8080

  magnum cluster-create --name k8s_cluster --cluster-template k8s_template --master-count 1 --node-count 2

  This is the magnum-conductor.log; I see something more is needed.

  2017-02-20 06:55:27.245 92552 ERROR
magnum.drivers.common.template_def [-]
HTTPSConnectionPool(host='discovery.etcd.io<http://discovery.etcd.io>',
port=443):
Max retries
exceeded with url: /new?size=1 (Caused by
NewConnectionError



('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object
at 0x7f1588461210>: Failed to establish a new
connection:
[Errno 113]
EHOSTUNREACH',))
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server [-] Exception during
message
handling
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server Traceback (most recent
call last):
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line
133, in _process_incoming
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server     res =
self.dispatcher.dispatch(message)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 150, in dispatch
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server     return
self._do_dispatch(endpoint,
method, ctxt, args)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 121, in _do_dispatch
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server     result =
func(ctxt, **new_args)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py",
line 165, in cluster_create
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server     create_timeout)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py",
line 97, in _create_stack
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server
_extract_template_definition(context,
cluster))
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py",
line 82, in _extract_template_definition
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server
scale_manager=scale_manager)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/drivers/common/template_def.py",
line 337, in extract_definition
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server
self.get_params(context,
cluster_template, cluster, **kwargs),
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/drivers/k8s_opensuse_v1/template_def.py",
line 50, in get_params
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server
extra_params['discovery_url'] =
self.get_discovery_url(cluster)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/drivers/common/template_def.py",
line 445, in get_discovery_url
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server
discovery_endpoint=discovery_endpoint)
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server GetDiscoveryUrlFailed:
Failed to get
discovery url from
'https://discovery.etcd.io/new?size=1'.
  2017-02-20 06:55:27.304 92552 ERROR
oslo_messaging.rpc.server
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server [-] Can not
acknowledge message.
Skip
processing
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server Traceback (most recent
call last):
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line
126, in _process_incoming
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     message.acknowledge()
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py",
line 119, in acknowledge
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server
self.message.acknowledge()
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 251, in acknowledge
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server
self._raw_message.ack()
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/kombu/message.py", line
88, in ack
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server
self.channel.basic_ack(self.delivery_tag)
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/amqp/channel.py", line
1584, in
basic_ack
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server
self._send_method((60, 80),
args)
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/amqp/abstract_channel.py",
line 56,
in _send_method
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     self.channel_id,
method_sig,
args, content,
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/amqp/method_framing.py",
line 221,
in write_method
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     write_frame(1,
channel, payload)
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/amqp/transport.py", line
188, in
write_frame
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     frame_type,
channel, size,
payload, 0xce,
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/eventlet/greenio/base.py",
line 385,
in sendall
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     tail =
self.send(data, flags)
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/eventlet/greenio/base.py",
line 379,
in send
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     return
self._send_loop(self.fd.send,
data, flags)
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server File

"/usr/lib/python2.7/site-packages/eventlet/greenio/base.py",
line 366,
in _send_loop
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server     return
send_method(data, *args)
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server error: [Errno 104]
Connection
reset by peer
  2017-02-20 06:55:27.309 92552 ERROR
oslo_messaging.rpc.server
  2017-02-20 06:55:27.310 92552 ERROR
oslo.messaging._drivers.impl_rabbit [-]
[6c39b368-bbdf-40cd-b1a5-b14da062f692] AMQP
server on
10.48.220.40:5672 is unreachable: <AMQPError:
unknown
error>. Trying
again
   in 1 seconds. Client port: 50462
  2017-02-20 06:55:28.347 92552 INFO
oslo.messaging._drivers.impl_rabbit [-]
[6c39b368-bbdf-40cd-b1a5-b14da062f692]
Reconnected to AMQP
server on
10.48.220.40:5672 via [amqp] clientwith port 58264.
  2017-02-20 06:59:09.827 92552 INFO
magnum.conductor.handlers.cluster_conductor
[req-9b6be3b8-d2fd-4e34-9d08-33d66a270fb1 admin
openstack
- - -] The
stack None was not found during cluster deletion.
  2017-02-20 06:59:10.400 92552 WARNING
magnum.common.cert_manager.local_cert_manager
[req-9b6be3b8-d2fd-4e34-9d08-33d66a270fb1 admin
openstack
- - -]
Deleting certificate
105d39e9-ca2a-497c-b951-df87df2a02
  24 from the local filesystem.
CertManager type
'local'
should be used for testing purpose.
  2017-02-20 06:59:10.402 92552 WARNING
magnum.common.cert_manager.local_cert_manager
[req-9b6be3b8-d2fd-4e34-9d08-33d66a270fb1 admin
openstack
- - -]
Deleting certificate
f0004b69-3634-4af9-9fec-d3fdba074f
  4c from the local filesystem.
CertManager type
'local'
should be used for testing purpose.
  2017-02-20 07:02:37.658 92552 WARNING
magnum.common.cert_manager.local_cert_manager
[req-0d9720f7-eb3e-4c9f-870b-e26feb26b9e2 admin
openstack
- - -]
Storing certificate data on the local
filesystem. CertM
  anager type 'local' should be used for
testing
purpose.
  2017-02-20 07:02:37.819 92552 WARNING
magnum.common.cert_manager.local_cert_manager
[req-0d9720f7-eb3e-4c9f-870b-e26feb26b9e2 admin
openstack
- - -]
Storing certificate data on the local
filesystem. CertM
  anager type 'local' should be used for
testing
purpose.
  2017-02-20 07:02:40.026 92552 ERROR
magnum.drivers.common.template_def
[req-0d9720f7-eb3e-4c9f-870b-e26feb26b9e2 admin
openstack
- - -]
HTTPSConnectionPool(host='discovery.etcd.io<http://discovery.etcd.io>',
port=443):
Max retries
   exceeded with url: /new?size=1
(Caused by


NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection
object at 0x7f158845ee90>: Failed to establish a
new
connection:
[Errno 113] EH
  OSTUNREACH',))
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server
[req-0d9720f7-eb3e-4c9f-870b-e26feb26b9e2
admin openstack - - -] Exception during message
handling
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server Traceback (most recent
call last):
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line
133, in _process_incoming
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server     res =
self.dispatcher.dispatch(message)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 150, in dispatch
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server     return
self._do_dispatch(endpoint,
method, ctxt, args)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 121, in _do_dispatch
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server     result =
func(ctxt, **new_args)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py",
line 165, in cluster_create
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server     create_timeout)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py",
line 97, in _create_stack
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server
_extract_template_definition(context,
cluster))
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py",
line 82, in _extract_template_definition
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server
scale_manager=scale_manager)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/drivers/common/template_def.py",
line 337, in extract_definition
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server
self.get_params(context,
cluster_template, cluster, **kwargs),
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/drivers/k8s_opensuse_v1/template_def.py",
line 50, in get_params
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server
extra_params['discovery_url'] =
self.get_discovery_url(cluster)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server File


"/usr/lib/python2.7/site-packages/magnum/drivers/common/template_def.py",
line 445, in get_discovery_url
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server
discovery_endpoint=discovery_endpoint)
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server GetDiscoveryUrlFailed:
Failed to get
discovery url from
'https://discovery.etcd.io/new?size=1'.
  2017-02-20 07:02:40.064 92552 ERROR
oslo_messaging.rpc.server



  Best Regards,

  Rushi.
  I MAY BE ONLY ONE PERSON, BUT I CAN BE
ONE
PERSON WHO MAKES
A DIFFERENCE


  On 2/20/17, 12:41 AM, "Johannes Grassler"
<johannes.grassler at suse.com<mailto:johannes.grassler at suse.com>> wrote:

      Hello Rushi,

      On 02/20/2017 12:26 AM, Ns, Rushi
wrote:
Hi Johannes/Vincent

Thank you both for the details. I did those steps as per the link

https://www.suse.com/documentation/suse-openstack-cloud-7/book_cloud_suppl/data/sec_deploy_kubernetes_without.html

that you provided before executing the cluster, as I learned this from the document; however, I am sure I did something wrong, as I don't know what the public etcd discovery URL is, since I don't have anything set up on my end.

Here are the commands I used, and as you can see I specified that parameter as you suggested, but only as "URL" without knowing the real value of "URL" (--labels registry_url=URL). So is this my mistake, or how should it be used? I am not sure, but I followed your document.
----------------------------------

1)
magnum cluster-template-create --name k8s_template \
                               --image-id sles-openstack-magnum-kubernetes \
                               --keypair-id default \
                               --external-network-id floating \
                               --dns-nameserver 8.8.8.8 \
                               --flavor-id m1.magnum \
                               --master-flavor-id m1.magnum \
                               --docker-volume-size 5 \
                               --network-driver flannel \
                               --coe kubernetes \
                               --floating-ip-enabled \
                               --tls-disabled \
                               --registry-enabled --labels insecure_registry_url=URL


2) magnum cluster-create --name k8s_cluster --cluster-template k8s_template --master-count 1 --node-count 2 --discovery-url none
-----------------------------------

Now I would like to understand where and how I can set up my own local etcd discovery service. Is it required?

      As far as I know it is. I may be wrong though. Luckily there is another solution:

Also, our internet access is through a proxy (http://proxy.pal.sap.corp:8080), so if you can guide me on how to do that setup I can do it, or tell me the URL value to specify and I can try.

      Just add `--http-proxy http://proxy.pal.sap.corp:8080` when creating the cluster template and do NOT provide any discovery URL options for either the cluster template or the cluster itself. Provided the proxy doesn't require authentication, this should do the trick...
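
      For reference, a minimal sketch of what that might look like applied to the template from this thread (all flags come from the commands quoted above; only --http-proxy is new, and whether --https-proxy/--no-proxy are also needed in your environment is an assumption to verify):

      # Recreate the template with the proxy set; note there is no discovery URL anywhere
      magnum cluster-template-create --name k8s_template \
          --image-id sles-openstack-magnum-kubernetes \
          --keypair-id default \
          --external-network-id floating \
          --dns-nameserver 8.8.8.8 \
          --flavor-id m1.magnum \
          --master-flavor-id m1.magnum \
          --docker-volume-size 5 \
          --network-driver flannel \
          --coe kubernetes \
          --floating-ip-enabled \
          --tls-disabled \
          --http-proxy http://proxy.pal.sap.corp:8080

      # Then create the cluster without --discovery-url
      magnum cluster-create --name k8s_cluster \
          --cluster-template k8s_template \
          --master-count 1 \
          --node-count 2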

      Cheers,

      Johannes


Also, I wanted to let you know that we had a Horizon issue (the public and admin pages' IPs were not handshaking) with Beta 8 when Neutron was configured with VLAN/Open vSwitch. Nicolas and I had some sessions on this, and he suggested using LinuxBridge instead of Open vSwitch, since the patch he has may not be in the Beta 8 build I downloaded. You can check with Nicolas on this; our current Beta 8 does not seem to work well with VLAN/Open vSwitch.

In any case, I will remove this cluster and rebuild it soon, but I will wait until the full GA build comes out instead of Beta 8, unless you think the latest Beta 8 will not have issues overall.

Please advise on the value for "--labels insecure_registry_url=URL" above, or on how to set up a local etcd discovery service.



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


On 2/17/17, 1:14 AM, "Vincent Untz" <VUntz at suse.com<mailto:VUntz at suse.com>> wrote:

Hi,

  On Friday, 17 February 2017, at 10:02 +0100, Johannes Grassler wrote:
Hello Rushi,

sorry, this took me a while to figure out. This is not the issue I initially thought it was. Rather, it appears to be related to your local networking setup and/or the cluster template you used. This is the crucial log excerpt:

| 2017-02-05 21:32:52.915 92552 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/magnum/drivers/common/template_def.py", line 445, in get_discovery_url
| 2017-02-05 21:32:52.915 92552 ERROR oslo_messaging.rpc.server     discovery_endpoint=discovery_endpoint)
| 2017-02-05 21:32:52.915 92552 ERROR oslo_messaging.rpc.server GetDiscoveryUrlFailed: Failed to get discovery url from 'https://discovery.etcd.io/new?size=1'.

Magnum uses etcd to orchestrate its clusters' instances. To that end it requires a discovery URL where cluster members announce their presence. By default Magnum uses the public etcd discovery URL https://discovery.etcd.io/new?size=%(size)d

This will not work in an environment without Internet access, which I presume yours is. The solution to this problem is to set up a local etcd discovery service and configure its URL template in magnum.conf:

[cluster]
etcd_discovery_service_endpoint_format = https://my.discovery.service.local/new?size=%(size)d
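
As a quick sanity check of such a setup (a sketch only; my.discovery.service.local is just the placeholder hostname from the config line above, and the openstack-magnum-* service names may differ on your installation):

    # A working discovery endpoint answers a plain GET with a fresh discovery URL,
    # the same way the public https://discovery.etcd.io service does
    curl "https://my.discovery.service.local/new?size=3"

    # After editing magnum.conf, restart the Magnum services so the new
    # etcd_discovery_service_endpoint_format is picked up
    systemctl restart openstack-magnum-api openstack-magnum-conductor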

Ah, this use case is in our doc. Rushi, can you follow what's documented at:

https://www.suse.com/documentation/suse-openstack-cloud-7/book_cloud_suppl/data/sec_deploy_kubernetes_without.html

Vincent

Cheers,

Johannes

On 02/16/2017 05:03 AM, Ns, Rushi wrote:
Hi Simon,

For some reason the mail I sent this morning didn't go out; it didn't bounce back either, but I found it stuck in my drafts. Anyway, sorry about the delay. Here you go again.

Please find the requested Magnum files attached.

Please find below the output of the other commands.

------


root@d38-ea-a7-93-e6-64:/var/log # openstack user list
+----------------------------------+---------------------+
| ID                               | Name                |
+----------------------------------+---------------------+
| d6a6e5c279734387ae2458ee361122eb | admin               |
| 7cd6e90b024e4775a772449f3aa135d9 | crowbar             |
| ea68b8bd8e0e4ac3a5f89a4e464b6054 | glance              |
| c051a197ba644a25b85e9f41064941f6 | cinder              |
| 374f9b824b9d43d5a7d2cf37505048f0 | neutron             |
| 062175d609ec428e876ee8f6e0f39ad3 | nova                |
| f6700a7f9d794819ab8fa9a07997c945 | heat                |
| dd22c62394754d95a8feccd44c1e2857 | heat_domain_admin   |
| 9822f3570b004cdca8b360c2f6d4e07b | aodh                |
| ac06fd30044e427793f7001c72f92096 | ceilometer          |
| d694b84921b04f168445ee8fcb9432b7 | magnum_domain_admin |
| bf8783f04b7a49e2adee33f792ae1cfb | magnum              |
| 2289a8f179f546239fe337b5d5df48c9 | sahara              |
| 369724973150486ba1d7da619da2d879 | barbican            |
| 71dcd06b2e464491ad1cfb3f249a2625 | manila              |
| e33a098e55c941e7a568305458e2f8fa | trove               |
+----------------------------------+---------------------+


root@d38-ea-a7-93-e6-64:/var/log # openstack domain list
+----------------------------------+---------+---------+-------------------------------------------+
| ID                               | Name    | Enabled | Description                               |
+----------------------------------+---------+---------+-------------------------------------------+
| default                          | Default | True    | The default domain                        |
| f916a54a4c0b4a96954bad9f9b797cf3 | heat    | True    | Owns users and projects created by heat   |
| 51557fee0408442f8aacc86e9f8140c6 | magnum  | True    | Owns users and projects created by magnum |
+----------------------------------+---------+---------+-------------------------------------------+




root@d38-ea-a7-93-e6-64:/var/log # openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+----------------------------------+-----------+
| Role                             | User                             | Group | Project                          | Domain                           | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+----------------------------------+-----------+
| 6c56316ecd36417184629f78fde5694c | d6a6e5c279734387ae2458ee361122eb |       | 6d704aa281874622b02a4e24954ede18 |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 7cd6e90b024e4775a772449f3aa135d9 |       | 7a18242f8e1c4dd9b42d31facb79493f |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | d6a6e5c279734387ae2458ee361122eb |       | 7a18242f8e1c4dd9b42d31facb79493f |                                  | False     |
| 932db80652074571ba1b98738c5af598 | 7cd6e90b024e4775a772449f3aa135d9 |       | 7a18242f8e1c4dd9b42d31facb79493f |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | ea68b8bd8e0e4ac3a5f89a4e464b6054 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | ea68b8bd8e0e4ac3a5f89a4e464b6054 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | c051a197ba644a25b85e9f41064941f6 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | c051a197ba644a25b85e9f41064941f6 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 374f9b824b9d43d5a7d2cf37505048f0 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | 374f9b824b9d43d5a7d2cf37505048f0 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 062175d609ec428e876ee8f6e0f39ad3 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | 062175d609ec428e876ee8f6e0f39ad3 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | f6700a7f9d794819ab8fa9a07997c945 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | f6700a7f9d794819ab8fa9a07997c945 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 932db80652074571ba1b98738c5af598 | d6a6e5c279734387ae2458ee361122eb |       | 7a18242f8e1c4dd9b42d31facb79493f |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | dd22c62394754d95a8feccd44c1e2857 |       |                                  | f916a54a4c0b4a96954bad9f9b797cf3 | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 9822f3570b004cdca8b360c2f6d4e07b |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | 9822f3570b004cdca8b360c2f6d4e07b |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | ac06fd30044e427793f7001c72f92096 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | ac06fd30044e427793f7001c72f92096 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | d694b84921b04f168445ee8fcb9432b7 |       |                                  | 51557fee0408442f8aacc86e9f8140c6 | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | bf8783f04b7a49e2adee33f792ae1cfb |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | bf8783f04b7a49e2adee33f792ae1cfb |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 2289a8f179f546239fe337b5d5df48c9 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | 2289a8f179f546239fe337b5d5df48c9 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 369724973150486ba1d7da619da2d879 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | 369724973150486ba1d7da619da2d879 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | 71dcd06b2e464491ad1cfb3f249a2625 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | 71dcd06b2e464491ad1cfb3f249a2625 |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | e33a098e55c941e7a568305458e2f8fa |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
| 6c56316ecd36417184629f78fde5694c | e33a098e55c941e7a568305458e2f8fa |       | 19c2c03e858b47da83eda020aa83639e |                                  | False     |
+----------------------------------+----------------------------------+-------+----------------------------------+----------------------------------+-----------+




Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


On 2/15/17, 11:01 AM, "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>> wrote:

 Hi Simon,

 I am sorry, I got stuck. Sure, I will send the logs now.


 Best Regards,

 Rushi.
 I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


 On 2/15/17, 10:26 AM, "Simon Briggs" <simon.briggs at suse.com<mailto:simon.briggs at suse.com>> wrote:

     Hi Rushi,

     I assume you were unable to join our call.

     Would it be possible to collect the logs that we requested? This is the only way my teams can help you remotely.

     Regards

     Simon Briggs


      On 15/02/17 08:58, Johannes Grassler wrote:
Hello Rushi,

OK. Can you please supply:

1) A supportconfig tarball: this will have the contents of both /etc/magnum/magnum.conf.d/ and magnum-conductor.log, which should allow me to figure out what is wrong.

2) The output of `openstack user list`, `openstack domain list`, and `openstack role assignment list` (all run as the admin user).

With that information I should be able to figure out whether your problem is the one I mentioned earlier.
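
A minimal sketch of collecting all of that on a controller node (assuming the admin credentials live in /root/.openrc, which may differ on your installation):

    # 1) Generate the supportconfig tarball (written under /var/log by default)
    supportconfig

    # 2) Capture the Keystone information as the admin user
    source /root/.openrc
    openstack user list
    openstack domain list
    openstack role assignment list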

Cheers,

Johannes

On 02/14/2017 04:42 PM, Ns, Rushi wrote:
Hello Johannes,

Thank you for the information. FYI, my setup is not an HA setup.

Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


On 2/14/17, 12:43 AM, "Johannes Grassler" <Johannes.Grassler at suse.com<mailto:Johannes.Grassler at suse.com>> wrote:

  Hello Rushi,

  If the problem is the

  | Creating cluster failed for the following reason(s): Failed to create trust Error ID: c7a27e1f-6a6a-452e-8d29-a38dbaa3fd78, Failed to create trust Error ID: a9f328cc-05e8-4c87-9876-7db5365812f2

  error you mentioned below, the problem is likely to be with Magnum rather than with Heat. Magnum creates a Keystone trust for each cluster that the cluster's VMs use to talk to the Magnum API, among others. We had a spell of trouble[0] with that recently, and you may be running into the same problem, especially if you are running an HA setup. Are you? If so, check whether all files in /etc/magnum/magnum.conf.d/ match across all controller nodes. If there are differences, especially in the [trust] section, you are probably affected by the same issue we ran into recently.

  Cheers,

  Johannes


  [0] https://github.com/crowbar/crowbar-openstack/pull/843
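
  For reference, a minimal sketch of such a consistency check, assuming hypothetical controller hostnames (controller1, controller2) and root SSH access between the nodes:

      # Any checksum that differs between nodes points at the kind of mismatch described above
      for host in controller1 controller2; do
          echo "== $host =="
          ssh root@$host 'md5sum /etc/magnum/magnum.conf.d/*'
      done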


  On 02/14/2017 09:01 AM, Simon Briggs wrote:
Hi Rushi,

You advise that you still have an issue. Would this still be the same as the one that Vincent helped with below?

I have added Johannes to those CC'd as he is skilled in debugging that type of error.

Thanks
Simon


Sent from my Samsung device

-------- Original message --------
From: Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Date: 06/02/2017 12:39 (GMT+02:00)
To: Rushi Ns <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Michal Jura <MJura at suse.com<mailto:MJura at suse.com>>, Nicolas Bock <nicolas.bock at suse.com<mailto:nicolas.bock at suse.com>>, Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Subject: Re: Weekly review of SAP Big Data SOC 7 testing

Rushi,

About the "Failed to create trust": can you check the Heat logs? My guess is that the error comes from there, and more context about what's happening around that error would probably be useful.

Thanks,

Vincent
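
A minimal sketch of pulling that context out of the Heat logs (the path /var/log/heat/heat-engine.log is the usual location and is an assumption for this installation):

    # Show recent trust-related errors with some surrounding context
    grep -i -B 2 -A 10 'trust' /var/log/heat/heat-engine.log | tail -n 80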

On Monday, 6 February 2017, at 04:01 +0000, Ns, Rushi wrote:
Hi Simon,

Thank you. Please see if Michal can give me some information about the Kubernetes image and how to consume it. I have solid Kubernetes knowledge from long experience; we have been running Kubernetes in production in Germany for many projects that I worked on. Anyway, please try to get Michal for a one- or two-hour discussion so I can get an idea, and please also help me find the image, since the link provided is not available at this time:

http://download.suse.de/ibs/Devel:/Docker:/Images:/SLE12SP2-JeOS-k8s-magnum/images/sles-openstack-magnum-kubernetes.x86_64.qcow2


@Michal: would you be kind enough to help me get the Kubernetes image, as the above link is not working?


Regarding SAHARA, I made progress: I uploaded the image (Mirantis-prepared SAHARA Hadoop images) and created the necessary configuration (cluster templates, node templates and everything), but the final step of creating a cluster from the template errored with the following. So I really need someone from your team with SAHARA knowledge to help get the issue fixed.

Here is the error while creating the cluster:

Creating cluster failed for the following reason(s): Failed to create trust Error ID: c7a27e1f-6a6a-452e-8d29-a38dbaa3fd78, Failed to create trust Error ID: a9f328cc-05e8-4c87-9876-7db5365812f2




Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Date: Saturday, February 4, 2017 at 1:57 AM
To: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Cc: Michal Jura <MJura at suse.com<mailto:MJura at suse.com>>, Nicolas Bock <nicolas.bock at suse.com<mailto:nicolas.bock at suse.com>>, Vincent Untz <VUntz at suse.com<mailto:VUntz at suse.com>>
Subject: Re: Weekly review of SAP Big Data SOC 7 testing

Hi Rushi,

Thanks for the update and I'm glad we are moving forward. Well done, everyone.

Michal is indeed an expert around these services, though I am aware he is presently on a sprint team mid-cycle, so he may find it difficult to do his required workload and deal with external work as well. So please be patient if it takes a small amount of time for him to respond.

Thanks
Simon



Sent from my Samsung device


-------- Original message --------
From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: 04/02/2017 02:19 (GMT+00:00)
To: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Cc: Michal Jura <MJura at suse.com<mailto:MJura at suse.com>>, Nicolas Bock <nicolas.bock at suse.com<mailto:nicolas.bock at suse.com>>
Subject: Re: Weekly review of SAP Big Data SOC 7 testing
Hi Simon,

Just to give you an update: the Horizon issue was resolved by changing Neutron from Open vSwitch to LinuxBridge, as mentioned by Nick. Now I need to move forward with SAHARA, which I can try, but if I run into issues I might need expertise from someone on your team who knows SAHARA.

Regarding the other request, Magnum (Kubernetes), I would like to discuss it with Michal Jura (mjura at suse.com<mailto:mjura at suse.com>), whom I have Cc'd here, as I was going through his GitHub document https://github.com/mjura/kubernetes-demo but wasn't able to find the image he specified. Link to the image:


http://download.suse.de/ibs/Devel:/Docker:/Images:/SLE12SP2-JeOS-k8s-magnum/images/sles-openstack-magnum-kubernetes.x86_64.qcow2



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Date: Friday, February 3, 2017 at 10:38 AM
To: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Subject: Re: Weekly review of SAP Big Data SOC 7 testing

Hi,

Sorry about delaying you.

I will coordinate with Nick to get the best resource for you.

Thanks
Simon



Sent from my Samsung device


-------- Original message --------
From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: 03/02/2017 18:33 (GMT+00:00)
To: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Subject: Re: Weekly review of SAP Big Data SOC 7 testing
Hi Simon,

Thank you. I waited on the call, but the toll-free number is not a US number, so the call never went through (the toll-free number seems to be UK). I stayed on GoToMeeting for 15 minutes and then disconnected. Sure, I will sync up with Nick, and yes, you are right, it does not seem to be a code issue; however, we are not sure, so I will check with Nick in about an hour.

I'll keep you posted.

I also need help on the Magnum (Kubernetes) side. I see a person, Michal Jura (mjura at suse.com<mailto:mjura at suse.com>); I spoke with Nick about bringing Michal onto another call to start the Magnum work. Can you try to arrange for Michal to be with me next week for a short call, once this Horizon issue is fixed and SAHARA works? Only then will I start working with Michal Jura.



Best Regards,

Rushi.
I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE


From: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Date: Friday, February 3, 2017 at 10:28 AM
To: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Subject: Re: Accepted: Weekly review of SAP Big Data SOC 7 testing

Hi Rushi,

I'm afraid that, because I'm used to finishing by dinner on Fridays, it slipped my mind that we had a 6pm call arranged. Sorry.

I am available now to talk if you want, though I have spoken to Nick and he advised that he has tested your Horizon setup and it works OK on his replica of your environment. In this situation we can only work on the premise that the Horizon issue is not a code problem but is local to your configuration.

He did say he was going to try and help you today on this matter. Did this help?

Kind regards
Simon Briggs




Sent from my Samsung device


-------- Original message --------
From: "Ns, Rushi" <rushi.ns at sap.com<mailto:rushi.ns at sap.com>>
Date: 02/02/2017 14:22 (GMT+00:00)
To: Simon Briggs <Simon.Briggs at suse.com<mailto:Simon.Briggs at suse.com>>
Subject: Accepted: Weekly review of SAP Big Data SOC 7 testing




--
Happy people are not in a hurry.


  --
  Johannes Grassler, Cloud Developer
  SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
  GF: Felix Imendörffer, Jane Smithard, Graham Norton
  Maxfeldstr. 5, 90409 Nürnberg, Germany









--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany

  --
Happy people are not in a hurry.



      --
      Johannes Grassler, Cloud Developer
      SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
      GF: Felix Imendörffer, Jane Smithard, Graham Norton
      Maxfeldstr. 5, 90409 Nürnberg, Germany





  --
  Johannes Grassler, Cloud Developer
  SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
  GF: Felix Imendörffer, Jane Smithard, Graham Norton
  Maxfeldstr. 5, 90409 Nürnberg, Germany









  --
  Mit freundlichen Grüßen / Best regards

  Carsten Duch
  Sales Engineer
  SUSE
  Nördlicher Zubringer 9-11

  40470 Düsseldorf
  (P)+49 173 5876 707
  (H)+49 521 9497 6388
  carsten.duch at suse.com<mailto:carsten.duch at suse.com>

  --
  SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard,
Graham Norton,
  HRB 21284 (AG Nürnberg)












More information about the caasp-beta mailing list