[caasp-beta] CAASP v4 beta 3
Ann Davis
AnDavis at suse.com
Tue Jul 9 08:18:37 MDT 2019
Hi,
We have another (separate) report of this same error. Apparently it did not happen with beta 2...
Thanks,
Ann
> On Jul 7, 2019, at 4:36 PM, Roger Klorese <roger.klorese at suse.com> wrote:
>
> I haven’t seen it - I’m sure someone in engineering will look at the post on Europe Monday...
>
> Roger B.A. Klorese
> Senior Product Manager
> SUSE
> 705 5th Ave S, Suite 1000
> Seattle WA 98104
> (P)+1 206.217.7432
> (M)+1 425.444.5493
> roger.klorese at suse.com
> Schedule a meeting: https://doodle.com/RogerKlorese
> GPG Key: D567 F186 A6AE D244 067E 95E4 E67D 019F 0670 D9CC
>
>> On Jul 7, 2019, at 3:35 PM, Ns, Rushi <rushi.ns at sap.com> wrote:
>>
>>
>> Looks like it's a bug, as I have tried several nodes and the error is always the same.
>>
>> I0707 15:32:37.970409 5379 ssh.go:167] running command: "sudo sh -c 'rm -rf /tmp/kured.d'"
>> I0707 15:32:38.155906 5379 states.go:40] === state kured.deploy applied successfully ===
>> I0707 15:32:38.155934 5379 states.go:35] === applying state skuba-update.start ===
>> I0707 15:32:38.156319 5379 ssh.go:167] running command: "sudo sh -c 'systemctl enable --now skuba-update.timer'"
>> I0707 15:32:38.278443 5379 ssh.go:190] stderr | Failed to enable unit: Unit file skuba-update.timer does not exist.
>> F0707 15:32:38.281186 5379 bootstrap.go:48] error bootstraping node: failed to apply state skuba-update.start: Process exited with status 1
>>
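The "Unit file skuba-update.timer does not exist" failure above suggests the skuba-update package never made it onto the node. A hedged check (the package/unit name is taken from the log; the host name lv1host is the one used elsewhere in this thread and is an assumption here):

```shell
# Verify the skuba-update package and its timer unit are present on the node
# (assumes a zypper-based CaaSP node reachable as root@lv1host)
ssh root@lv1host 'rpm -q skuba-update; systemctl list-unit-files "skuba-update*"'
```

If the package is missing, the node image or its repositories are the place to look, not the bootstrap step itself.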
>>
>> BR,
>>
>> Rushi.
>> I MAY BE ONLY ONE PERSON, BUT I CAN BE ONE PERSON WHO MAKES A DIFFERENCE
>>
>>
>> From: Rushi NS <rushi.ns at sap.com>
>> Date: Sunday, July 7, 2019 at 2:55 PM
>> To: Roger Klorese <roger.klorese at suse.com>
>> Cc: Jeff Price <JPrice at suse.com>, "caasp-beta at lists.suse.com" <caasp-beta at lists.suse.com>
>> Subject: Re: [caasp-beta] CAASP v4 beta 3
>>
>> I see this after trying with “-v5”. I have HA (NGINX as the load balancer during cluster init).
>>
>>
>> I0707 14:55:10.527098 5173 deployments.go:50] uploading local file "kubeadm-init.conf" to remote file "/tmp/kubeadm-init.conf"
>> I0707 14:55:10.527232 5173 files.go:29] uploading to remote file "/tmp/kubeadm-init.conf" with contents
>> I0707 14:55:10.741276 5173 ssh.go:167] running command: "sudo sh -c 'kubeadm init --config /tmp/kubeadm-init.conf --skip-token-print '"
>> I0707 14:55:10.866573 5173 ssh.go:190] stdout | [init] Using Kubernetes version: v1.14.1
>> I0707 14:55:10.866632 5173 ssh.go:190] stdout | [preflight] Running pre-flight checks
>> I0707 14:55:11.136692 5173 ssh.go:190] stderr | error execution phase preflight: [preflight] Some fatal errors occurred:
>> I0707 14:55:11.136727 5173 ssh.go:190] stderr | [ERROR Port-6443]: Port 6443 is in use
>> I0707 14:55:11.136737 5173 ssh.go:190] stderr | [ERROR Port-10251]: Port 10251 is in use
>> I0707 14:55:11.136795 5173 ssh.go:190] stderr | [ERROR Port-10252]: Port 10252 is in use
>> I0707 14:55:11.136825 5173 ssh.go:190] stderr | [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
>> I0707 14:55:11.136839 5173 ssh.go:190] stderr | [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
>> I0707 14:55:11.136848 5173 ssh.go:190] stderr | [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
>> I0707 14:55:11.136863 5173 ssh.go:190] stderr | [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
>> I0707 14:55:11.136872 5173 ssh.go:190] stderr | [ERROR Port-10250]: Port 10250 is in use
>> I0707 14:55:11.136879 5173 ssh.go:190] stderr | [ERROR Port-2379]: Port 2379 is in use
>> I0707 14:55:11.136886 5173 ssh.go:190] stderr | [ERROR Port-2380]: Port 2380 is in use
>> I0707 14:55:11.136894 5173 ssh.go:190] stderr | [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
>> I0707 14:55:11.136910 5173 ssh.go:190] stderr | [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
>> I0707 14:55:11.138914 5173 ssh.go:167] running command: "sudo sh -c 'rm /tmp/kubeadm-init.conf'"
>> F0707 14:55:11.237429 5173 bootstrap.go:48] error bootstraping node: failed to apply state kubeadm.init: Process exited with status 1
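Every preflight error above points the same way: a previous kubeadm run (or a prior install attempt) left a control plane on this node, so the ports, manifest files, and /var/lib/etcd are already taken. A hedged cleanup sketch, to be run on the target node itself before retrying bootstrap; `kubeadm reset` is a standard kubeadm subcommand, but verify what it leaves behind on your version:

```shell
# Tear down leftover control-plane state from the earlier run (destructive!)
sudo kubeadm reset -f
# Depending on the kubeadm version, etcd data and CNI config may remain;
# clearing them matches the DirAvailable--var-lib-etcd check above
sudo rm -rf /var/lib/etcd /etc/cni/net.d
```

After this, the preflight checks on ports 6443/10250-10252/2379-2380 and the manifest files should pass on a fresh bootstrap.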
>>
>> BR,
>>
>> Rushi.
>>
>>
>> From: Roger Klorese <roger.klorese at suse.com>
>> Date: Sunday, July 7, 2019 at 2:27 PM
>> To: Rushi NS <rushi.ns at sap.com>
>> Cc: Jeff Price <JPrice at suse.com>, "caasp-beta at lists.suse.com" <caasp-beta at lists.suse.com>
>> Subject: Re: [caasp-beta] CAASP v4 beta 3
>>
>> Run skuba with “-v5” - you will probably see a clear error message in the verbose messages.
>>
>>
>>
>>
>>
>>
>> On Jul 7, 2019, at 1:55 PM, Ns, Rushi <rushi.ns at sap.com> wrote:
>>
>> Hi Jeff,
>>
>> Sorry to bother you. I see that the new CAASP introduces SKUBA (replacing CAASPCTL). I started deploying a new cluster with SKUBA and ran into the same issue as before with ssh-agent.
>>
>> I set up ssh-agent and ran ssh-copy-id to all the nodes, and I can ssh without a password, but the skuba bootstrap still fails afterwards.
>>
>> Set up ssh-agent
>>
>> eval "$(ssh-agent -s)"
>>
>> add the keys to agent
>>
>> ssh-add
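Before blaming the bootstrap itself, it is worth confirming that the agent really holds a key and that agent-based login to the target works. A hedged sketch (host name lv1host is taken from the bootstrap command below):

```shell
# Confirm the agent is running and holds at least one identity
ssh-add -l
# Confirm passwordless, agent-backed login to the target node
ssh -o BatchMode=yes root@lv1host true && echo "agent auth OK"
```

If `ssh-add -l` reports no identities, skuba has nothing to authenticate with even though interactive ssh (which can fall back to other methods) succeeds.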
>>
>>
>>
>> skuba node bootstrap --user root --target lv1host lmaster1
>> -------
>>
>> ** This is a BETA release and NOT intended for production usage. **
>> [bootstrap] updating init configuration with target information
>> W0707 13:52:59.141264 8016 ssh.go:306]
>> The authenticity of host '10.48.164.174:22' can't be established.
>> ECDSA key fingerprint is 39:99:33:64:be:7d:a9:db:90:f6:93:67:3b:b9:3e:73.
>> I0707 13:52:59.141375 8016 ssh.go:307] accepting SSH key for "lv1host:22"
>> I0707 13:52:59.141412 8016 ssh.go:308] adding fingerprint for "lv1host:22" to "known_hosts"
>> [bootstrap] writing init configuration for node
>> [bootstrap] applying init configuration to node
>> F0707 13:53:01.718642 8016 bootstrap.go:48] error bootstraping node: failed to apply state kubeadm.init: Process exited with status 1
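The terse failure here hides the real kubeadm error; Roger's "-v5" suggestion earlier in the thread is what produced the detailed preflight output further up. A sketch of filtering such a verbose log down to just the fatal findings (the log file name is hypothetical; the sample lines are quoted from this thread):

```shell
# Simulate a captured verbose log with lines quoted from this thread
cat > skuba-bootstrap.log <<'EOF'
I0707 14:55:11.136727 5173 ssh.go:190] stderr | [ERROR Port-6443]: Port 6443 is in use
I0707 14:55:11.136894 5173 ssh.go:190] stderr | [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
EOF
# In practice, capture the real run instead (flag spelled as in Roger's mail):
#   skuba node bootstrap --user root --target lv1host lmaster1 -v5 2> skuba-bootstrap.log
grep -o '\[ERROR [^]]*\]' skuba-bootstrap.log
```

This prints one bracketed `[ERROR ...]` tag per fatal preflight check, which is usually enough to identify the leftover state on the node.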
>>
>>
>>
>>
>> Do you know what the root cause of the issue is now…
>>
>> BR,
>>
>> Rushi.
>>
>>
>> From: Jeff Price <JPrice at suse.com>
>> Date: Thursday, May 30, 2019 at 7:05 PM
>> To: Rushi NS <rushi.ns at sap.com>
>> Cc: "Le Bihan Stéphane (AMUNDI-ITS)" <stephane.lebihan at amundi.com>, Jean Marc Lambert <jean-marc.lambert at suse.com>, "caasp-beta at lists.suse.com" <caasp-beta at lists.suse.com>
>> Subject: Re: [caasp-beta] CAASP v4 beta 3
>>
>> Set up ssh-agent
>>
>> eval "$(ssh-agent -s)"
>>
>> add the keys to agent
>>
>> ssh-add
>> _______________________________________________
>> caasp-beta mailing list
>> caasp-beta at lists.suse.com
>> http://lists.suse.com/mailman/listinfo/caasp-beta