From aryan1 at allantgroup.com Mon Aug 18 13:20:45 2014 From: aryan1 at allantgroup.com (Andy Ryan) Date: Mon, 18 Aug 2014 14:20:45 -0500 Subject: [sles-beta] Enabling ip forwarding causes server to reboot Message-ID: My SLES 12RC1 server will reboot when I simply do an echo 1 >> /proc/sys/net/ipv4/ip_forward. It reboots instantly and nothing is in the logs. -- Andy Ryan | Systems Administrator *Email:* aryan1 at allantgroup.com *website:* www.allantgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From snbarth at suse.de Tue Aug 19 02:26:21 2014 From: snbarth at suse.de (Stephan Barth) Date: Tue, 19 Aug 2014 10:26:21 +0200 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: References: Message-ID: <20140819082621.GC30029@lovelace.suse.de> Hi, On Mon, Aug 18, 2014 at 02:20:45PM -0500, Andy Ryan wrote: > My SLES 12RC1 server will reboot when I simply do an echo 1 >> > /proc/sys/net/ipv4/ip_forward. It reboots instantly and nothing is in the > logs. Do you really use two >>? I tried both with one and two and it worked for me in the same way. Is this on bare metal or in a VM? Which type? Or maybe you just open an SR for this if it's reproducible. -- Bye, Stephan Barth SUSE MaintenanceSecurity - SUSE LINUX Products GmbH GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer HRB 21284 (AG N?rnberg) From mge at suse.com Tue Aug 19 02:34:29 2014 From: mge at suse.com (Matthias G. Eckermann) Date: Tue, 19 Aug 2014 10:34:29 +0200 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: References: Message-ID: <20140819083429.GA3347@suse.com> Hello Andy and all, On 2014-08-18 T 14:20 -0500 Andy Ryan wrote: > My SLES 12RC1 server will reboot when I simply do an > echo 1 >> /proc/sys/net/ipv4/ip_forward. It reboots > instantly and nothing is in the logs. Do you really do a " >> ", i.e. with two " > " ? If yes, can you please try with one " > " only? One " > " is what should be used, however, using " >> " definitely should not reboot the system:-( Please open a Service Request. Thanks - MgE -- Matthias G. Eckermann Senior Product Manager SUSE? Linux Enterprise SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg Germany GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG N?rnberg) From kdupke at suse.com Tue Aug 19 02:43:56 2014 From: kdupke at suse.com (Kai Dupke) Date: Tue, 19 Aug 2014 10:43:56 +0200 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: References: Message-ID: <53F30E4C.6020100@suse.com> On 08/18/2014 09:20 PM, Andy Ryan wrote: > My SLES 12RC1 server will reboot when I simply do an echo 1 >> > /proc/sys/net/ipv4/ip_forward. It reboots instantly and nothing is in the > logs. Sure it's RC1? Does not happen here. greetings Kai Dupke Senior Product Manager Server Product Line -- Phone: +49-(0)5102-9310828 Mail: kdupke at suse.com Mobile: +49-(0)173-5876766 WWW: www.suse.com SUSE Linux Products GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG Nurnberg) From Dick.Waite at softwareag.com Tue Aug 19 04:50:24 2014 From: Dick.Waite at softwareag.com (Waite, Dick) Date: Tue, 19 Aug 2014 10:50:24 +0000 Subject: [sles-beta] RC2 Dates ? Message-ID: <46AC8C81C10B8C48820201DF2AE1D76D67ADE30A@hqmbx6.eur.ad.sag> Grand Day SLES12 List, On one of the older schedules we were going to have a SLES 12 refresh on Friday 22nd. Is that still a maybe date or should we plan for a relaxing weekend with the Bar-B-Q? 
Schools will be starting up over the next couple of weeks. Then people who are not tied to school holidays can exit stage left, and that?s my exit. __R Software AG ? Sitz/Registered office: Uhlandstra?e 12, 64297 Darmstadt, Germany ? Registergericht/Commercial register: Darmstadt HRB 1562 - Vorstand/Management Board: Karl-Heinz Streibich (Vorsitzender/Chairman), Dr. Wolfram Jost, Arnd Zinnhardt; - Aufsichtsratsvorsitzender/Chairman of the Supervisory Board: Dr. Andreas Bereczky - http://www.softwareag.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From aginies at suse.com Tue Aug 19 08:37:24 2014 From: aginies at suse.com (Antoine Ginies) Date: Tue, 19 Aug 2014 16:37:24 +0200 Subject: [sles-beta] SLES12 RC1 KVM client Productname In-Reply-To: <40637DBB36AF3941B243A286A432CA0B0F9C8574@HXMB12.pnet.ch> References: <40637DBB36AF3941B243A286A432CA0B0F9C8574@HXMB12.pnet.ch> Message-ID: <20140819143724.GA19292@linux-w520.guibland.com> urs.frey at post.ch: > Hi Hello, > > When set up a SLES12 RC1 as KVM client, the productname looks quite strange now > > h05cnh:~ # facter | grep product > productname => Standard PC (i440FX + PIIX, 1996) > h05cnh:~ # uname -a > Linux h05cnh 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux > h05cnh:~ # facter | grep product > productname => Standard PC (i440FX + PIIX, 1996) > h05cnh:~ # dmidecode | grep Product > Product Name: Standard PC (i440FX + PIIX, 1996) > h05cnh:~ # facter | grep virtual > is_virtual => true > virtual => kvm > h05cnh:~ # By default machine type for Qemu is pc-i440fx: under SLES12 host: linux-x61s:~ # qemu-system-x86_64 -M ? | grep default pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) and under SLE11SP3 host: pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > Until Beta9 the output was different and more readable, as it was under SLES11-SP3 > h039ua:~ # uname -a > Linux h039ua 3.12.22-2-default #1 SMP Fri Jun 13 13:46:18 UTC 2014 (ee1c2a2) x86_64 x86_64 x86_64 GNU/Linux > h039ua:~ # facter | grep product > productname => Bochs > h039ua:~ # dmidecode | grep Product > Product Name: Bochs > h039ua:~ # facter | grep virtual > is_virtual => true > virtual => kvm > h039ua:~ # > > Could this new unusual product name coming with SLES12-RC1 be considered as I bug? Machine type can be changed. So it's not a bug if the machine type was specified to launch the VM guest. regards. 
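A quick way to cross-check what a given guest is actually using (the guest name below is only a placeholder): on the host, dump the definition and look for the machine type libvirt recorded,

  virsh dumpxml GUESTNAME | grep machine

and inside the guest, query the same DMI string that facter picks up,

  dmidecode -s system-product-name

If the recorded machine type is one of the pc-i440fx types, then "Standard PC (i440FX + PIIX, 1996)" is simply the human-readable description of that default machine type, not a changed product.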
> I was searching also in SUSEConnect to see how KVM gets recognized and will be treated to be registered as virtual client > > I'll appreciate to get more information about this new product naming of a KVM client > > And also of course to get a hint where about to search in the code of suse connect to get an idea about hovKVM gets treated please > Thank you very much > > Best regards > > Urs Frey > Post CH AG > Informationstechnologie > IT Betrieb > Webergutstrasse 12 > 3030 Bern (Zollikofen) > Telefon : ++41 (0)58 338 58 70 > FAX : ++41 (0)58 667 30 07 > E-Mail: urs.frey at post.ch > > > > _______________________________________________ > sles-beta mailing list > sles-beta at lists.suse.com > http://lists.suse.com/mailman/listinfo/sles-beta -- Antoine Ginies Project Manager SUSE France From aryan1 at allantgroup.com Tue Aug 19 09:44:50 2014 From: aryan1 at allantgroup.com (Andy Ryan) Date: Tue, 19 Aug 2014 10:44:50 -0500 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: <53F30E4C.6020100@suse.com> References: <53F30E4C.6020100@suse.com> Message-ID: As far as > vs >>, that does not even come into play. I set it using sysctl -w net.ipv4.ip_forward=1 and it still rebooted. And yes, it is RC1. On Tue, Aug 19, 2014 at 3:43 AM, Kai Dupke wrote: > On 08/18/2014 09:20 PM, Andy Ryan wrote: > > My SLES 12RC1 server will reboot when I simply do an echo 1 >> > > /proc/sys/net/ipv4/ip_forward. It reboots instantly and nothing is in > the > > logs. > > Sure it's RC1? Does not happen here. > > greetings > Kai Dupke > Senior Product Manager > Server Product Line > -- > Phone: +49-(0)5102-9310828 Mail: kdupke at suse.com > Mobile: +49-(0)173-5876766 WWW: www.suse.com > > SUSE Linux Products GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany) > GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG Nurnberg) > _______________________________________________ > sles-beta mailing list > sles-beta at lists.suse.com > http://lists.suse.com/mailman/listinfo/sles-beta > -- Andy Ryan | Systems Administrator *Phone:* 630.778.2756 | *Email:* aryan1 at allantgroup.com *website:* www.allantgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From aryan1 at allantgroup.com Tue Aug 19 10:08:36 2014 From: aryan1 at allantgroup.com (Andy Ryan) Date: Tue, 19 Aug 2014 11:08:36 -0500 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: References: Message-ID: I am running pacemaker on these systems and that seems to be an issue. I stopped pacemaker, turned on ip_forward, and restarted pacemaker and the system did not reboot. It looks like it is the stonith device? The stonith device is a shared disk partition, so it should not be an issue (since the disc is on its own FC HBA). But I checked and one of the other nodes does indeed reset the node that I tried to enable ip_forwarding on. On Mon, Aug 18, 2014 at 2:20 PM, Andy Ryan wrote: > My SLES 12RC1 server will reboot when I simply do an echo 1 >> > /proc/sys/net/ipv4/ip_forward. It reboots instantly and nothing is in the > logs. > > -- > Andy Ryan | Systems Administrator > *Email:* aryan1 at allantgroup.com > *website:* www.allantgroup.com > > -- Andy Ryan | Systems Administrator *Phone:* 630.778.2756 | *Email:* aryan1 at allantgroup.com *website:* www.allantgroup.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From urs.frey at post.ch Tue Aug 19 10:57:10 2014 From: urs.frey at post.ch (urs.frey at post.ch) Date: Tue, 19 Aug 2014 16:57:10 +0000 Subject: [sles-beta] SLES12 RC1 KVM client Productname In-Reply-To: <20140819143724.GA19292@linux-w520.guibland.com> References: <40637DBB36AF3941B243A286A432CA0B0F9C8574@HXMB12.pnet.ch> <20140819143724.GA19292@linux-w520.guibland.com> Message-ID: <40637DBB36AF3941B243A286A432CA0B0F9D0C48@HXMB12.pnet.ch> >Antoine Ginies Hi Antoine Thank you very much for your answer, I have read with very high interest. >By default machine type for Qemu is pc-i440fx: >under SLES12 host: >linux-x61s:~ # qemu-system-x86_64 -M ? | grep default >pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > >and under SLE11SP3 host: >pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) >Machine type can be changed. So it's not a bug if the machine type was >specified to launch the VM guest. ON my KVM Dom0 (Hypervisor) I can see what the supported values with qemu-system-x86_64 are. What I miss is some KVM related value in the list below. SLES12-RC1 ========== h05cni:~ # uname -a Linux h05cni 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux h05cni:~ # h05cni:~ # qemu-system-x86_64 -M ? h05cni:~ # qemu-system-x86_64 -machine help Supported machines are: pc-0.13 Standard PC (i440FX + PIIX, 1996) pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2.0) pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) pc-1.0 Standard PC (i440FX + PIIX, 1996) pc-q35-1.7 Standard PC (Q35 + ICH9, 2009) pc-1.1 Standard PC (i440FX + PIIX, 1996) q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2.0) pc-q35-2.0 Standard PC (Q35 + ICH9, 2009) pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) pc-i440fx-1.5 Standard PC (i440FX + PIIX, 1996) pc-0.14 Standard PC (i440FX + PIIX, 1996) pc-0.15 Standard PC (i440FX + PIIX, 1996) xenfv Xen Fully-virtualized PC pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) isapc ISA-only PC pc-0.10 Standard PC (i440FX + PIIX, 1996) pc-1.2 Standard PC (i440FX + PIIX, 1996) pc-0.11 Standard PC (i440FX + PIIX, 1996) pc-i440fx-1.7 Standard PC (i440FX + PIIX, 1996) pc-i440fx-1.6 Standard PC (i440FX + PIIX, 1996) none empty machine xenpv Xen Para-virtualized PC pc-q35-1.5 Standard PC (Q35 + ICH9, 2009) pc-q35-1.6 Standard PC (Q35 + ICH9, 2009) pc-0.12 Standard PC (i440FX + PIIX, 1996) pc-1.3 Standard PC (i440FX + PIIX, 1996) h05cni:~ # SLES11-SP3 =========== h062rm:~ # uname -a Linux h062rm 3.0.101-0.31-default #1 SMP Wed Jun 4 08:59:53 UTC 2014 (87c5279) x86_64 x86_64 x86_64 GNU/Linux h062rm:~ # qemu-kvm -machine help Supported machines are: q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-1.4) pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-1.4) pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) pc-1.3 Standard PC pc-1.2 Standard PC pc-1.1 Standard PC pc-1.0 Standard PC pc-0.15 Standard PC pc-0.14 Standard PC pc-0.13 Standard PC pc-0.12 Standard PC pc-0.11 Standard PC, qemu 0.11 pc-0.10 Standard PC, qemu 0.10 isapc ISA-only PC none empty machine h062rm:~ # -machine [type=]name[,prop[=value][,...]] selects emulated machine ('-machine help' for list) property accel=accel1[:accel2[:...]] selects accelerator supported accelerators are kvm, xen, tcg (default: tcg) kernel_irqchip=on|off controls accelerated irqchip support kvm_shadow_mem=size of KVM shadow MMU dump-guest-core=on|off include guest memory in a core dump (default=on) mem-merge=on|off controls memory merge 
support (default: on) Maybe you misunderstood: I did set up this KVM client with the graphical virt-install and also with vm-install, but I could not find a setting for the HW (machine) type there yet. So obviously the graphical tools pick the default value on their own. What I noted is that dmidecode shows a different result on SLES12 RC1 than on SLES11-SP3 and SLES12 Beta9. So it is not Qemu which has changed its behavior, but dmidecode, which now obviously shows the real qemu standard value. And because dmidecode has changed, facter also shows the "real" qemu default value. Best regards Urs Frey Post CH AG Informationstechnologie IT Betrieb Webergutstrasse 12 3030 Bern (Zollikofen) Telefon : ++41 (0)58 338 58 70 FAX : ++41 (0)58 667 30 07 E-Mail: urs.frey at post.ch -----Ursprüngliche Nachricht----- Von: Antoine Ginies [mailto:aginies at suse.com] Gesendet: Tuesday, August 19, 2014 4:37 PM An: Frey Urs, IT222 Cc: sles-beta at lists.suse.com Betreff: Re: [sles-beta] SLES12 RC1 KVM client Productname urs.frey at post.ch: > Hi Hello, > > When set up a SLES12 RC1 as KVM client, the productname looks quite strange now > > h05cnh:~ # facter | grep product > productname => Standard PC (i440FX + PIIX, 1996) > h05cnh:~ # uname -a > Linux h05cnh 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux > h05cnh:~ # facter | grep product > productname => Standard PC (i440FX + PIIX, 1996) > h05cnh:~ # dmidecode | grep Product > Product Name: Standard PC (i440FX + PIIX, 1996) > h05cnh:~ # facter | grep virtual > is_virtual => true > virtual => kvm > h05cnh:~ # By default machine type for Qemu is pc-i440fx: under SLES12 host: linux-x61s:~ # qemu-system-x86_64 -M ? | grep default pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) and under SLE11SP3 host: pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > Until Beta9 the output was different and more readable, as it was under SLES11-SP3 > h039ua:~ # uname -a > Linux h039ua 3.12.22-2-default #1 SMP Fri Jun 13 13:46:18 UTC 2014 (ee1c2a2) x86_64 x86_64 x86_64 GNU/Linux > h039ua:~ # facter | grep product > productname => Bochs > h039ua:~ # dmidecode | grep Product > Product Name: Bochs > h039ua:~ # facter | grep virtual > is_virtual => true > virtual => kvm > h039ua:~ # > > Could this new unusual product name coming with SLES12-RC1 be considered as I bug? Machine type can be changed. So it's not a bug if the machine type was specified to launch the VM guest. regards.
> I was searching also in SUSEConnect to see how KVM gets recognized and will be treated to be registered as virtual client > > I'll appreciate to get more information about this new product naming of a KVM client > > And also of course to get a hint where about to search in the code of suse connect to get an idea about hovKVM gets treated please > Thank you very much > > Best regards > > Urs Frey > Post CH AG > Informationstechnologie > IT Betrieb > Webergutstrasse 12 > 3030 Bern (Zollikofen) > Telefon : ++41 (0)58 338 58 70 > FAX : ++41 (0)58 667 30 07 > E-Mail: urs.frey at post.ch > > > > _______________________________________________ > sles-beta mailing list > sles-beta at lists.suse.com > http://lists.suse.com/mailman/listinfo/sles-beta -- Antoine Ginies Project Manager SUSE France From aginies at suse.com Tue Aug 19 11:49:01 2014 From: aginies at suse.com (Antoine Ginies) Date: Tue, 19 Aug 2014 19:49:01 +0200 Subject: [sles-beta] SLES12 RC1 KVM client Productname In-Reply-To: <40637DBB36AF3941B243A286A432CA0B0F9D0C48@HXMB12.pnet.ch> References: <40637DBB36AF3941B243A286A432CA0B0F9C8574@HXMB12.pnet.ch> <20140819143724.GA19292@linux-w520.guibland.com> <40637DBB36AF3941B243A286A432CA0B0F9D0C48@HXMB12.pnet.ch> Message-ID: <20140819174901.GA20663@linux-w520.guibland.com> urs.frey at post.ch: > >Antoine Ginies > Hi Antoine > > Thank you very much for your answer, I have read with very high interest. > > >By default machine type for Qemu is pc-i440fx: > >under SLES12 host: > >linux-x61s:~ # qemu-system-x86_64 -M ? | grep default > >pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > > > >and under SLE11SP3 host: > >pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > > >Machine type can be changed. So it's not a bug if the machine type was > >specified to launch the VM guest. > > ON my KVM Dom0 (Hypervisor) I can see what the supported values with qemu-system-x86_64 are. > What I miss is some KVM related value in the list below. > > SLES12-RC1 > ========== > h05cni:~ # uname -a > Linux h05cni 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux > h05cni:~ # > h05cni:~ # qemu-system-x86_64 -M ? 
> h05cni:~ # qemu-system-x86_64 -machine help > Supported machines are: > pc-0.13 Standard PC (i440FX + PIIX, 1996) > pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2.0) > pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > pc-1.0 Standard PC (i440FX + PIIX, 1996) > pc-q35-1.7 Standard PC (Q35 + ICH9, 2009) > pc-1.1 Standard PC (i440FX + PIIX, 1996) > q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2.0) > pc-q35-2.0 Standard PC (Q35 + ICH9, 2009) > pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) > pc-i440fx-1.5 Standard PC (i440FX + PIIX, 1996) > pc-0.14 Standard PC (i440FX + PIIX, 1996) > pc-0.15 Standard PC (i440FX + PIIX, 1996) > xenfv Xen Fully-virtualized PC > pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) > isapc ISA-only PC > pc-0.10 Standard PC (i440FX + PIIX, 1996) > pc-1.2 Standard PC (i440FX + PIIX, 1996) > pc-0.11 Standard PC (i440FX + PIIX, 1996) > pc-i440fx-1.7 Standard PC (i440FX + PIIX, 1996) > pc-i440fx-1.6 Standard PC (i440FX + PIIX, 1996) > none empty machine > xenpv Xen Para-virtualized PC > pc-q35-1.5 Standard PC (Q35 + ICH9, 2009) > pc-q35-1.6 Standard PC (Q35 + ICH9, 2009) > pc-0.12 Standard PC (i440FX + PIIX, 1996) > pc-1.3 Standard PC (i440FX + PIIX, 1996) > h05cni:~ # > > SLES11-SP3 > =========== > h062rm:~ # uname -a > Linux h062rm 3.0.101-0.31-default #1 SMP Wed Jun 4 08:59:53 UTC 2014 (87c5279) x86_64 x86_64 x86_64 GNU/Linux > h062rm:~ # qemu-kvm -machine help > Supported machines are: > q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-1.4) > pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) > pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-1.4) > pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > pc-1.3 Standard PC > pc-1.2 Standard PC > pc-1.1 Standard PC > pc-1.0 Standard PC > pc-0.15 Standard PC > pc-0.14 Standard PC > pc-0.13 Standard PC > pc-0.12 Standard PC > pc-0.11 Standard PC, qemu 0.11 > pc-0.10 Standard PC, qemu 0.10 > isapc ISA-only PC > none empty machine > h062rm:~ # > > -machine [type=]name[,prop[=value][,...]] > selects emulated machine ('-machine help' for list) > property accel=accel1[:accel2[:...]] selects accelerator > supported accelerators are kvm, xen, tcg (default: tcg) > kernel_irqchip=on|off controls accelerated irqchip support > kvm_shadow_mem=size of KVM shadow MMU > dump-guest-core=on|off include guest memory in a core dump (default=on) > mem-merge=on|off controls memory merge support (default: on) > > Maybe you misunderstood: > I did set up this KVM client with the graphical virt-install and also using vm-install. > But there setting of HW type I could not find yet > So obviously the graphical tools do detect by their own setting the default value. Yes the graphical tool should use the default value, and it should be "pc-i440fx" by default. > What I noted is that obviously dmidecode does show a different result from SLES11-SP3 & SLES12 Beta9 towards SLES12 RC1 I didn't done any test under BETA9. Lets focus on SLE11SP3 and SLE12RC1 by default (with any changes). Default Machine type for VM guest must be "pc-i440fx" if you have done the installation using "virt-install" or "vm-install" tool. 
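If you want to pin a particular machine type already at installation time rather than taking that default, virt-install has a --machine option; a minimal sketch (guest name, disk path and ISO path are placeholders, not taken from this thread):

  virt-install --name sles12guest --ram 2048 --vcpus 2 \
    --machine pc-i440fx-2.0 \
    --disk path=/var/lib/libvirt/images/sles12guest.qcow2,size=20,format=qcow2 \
    --cdrom /srv/isos/SLE-12-Server-DVD-x86_64.iso

Whatever is passed to --machine ends up as the machine= attribute of the <type> element in the guest's libvirt XML, which is the value the guest later reports through dmidecode.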
Once the installation is done, the libvirt configuration will be in: /etc/libvirt/qemu if you grep "machine" in the xml file: linux-x61s:/etc/libvirt/qemu # grep machine sles11.xml <type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type> of course you can change this value using virsh (VM guest should be off): ******************** 1) virsh -c qemu:///system 2) virsh # list --inactive Id Name State ---------------------------------------------------- - sles11 shut off 3) virsh # edit sles11 ******************** 4) change the machine= value inside the <type> tag, in our example we change the value to "pc-q35-2.0": <type arch='x86_64' machine='pc-q35-2.0'>hvm</type> ******************** Domain sles11 XML configuration not changed. 5) virsh # start sles11 ******************** log on the guest and check machine/Product: ******************** 6) linux-ed3a:~ # dmidecode | grep Prod Product Name: Standard PC (Q35 + ICH9, 2009) ******************** > So it is not Qemu, which has changed its behavior, but dmidecode which now does obviously show the real qemu standard value. > And because dmidecode has changed, facter does also show the "real" qemu default value. I don't know how your machine type has changed, but something has changed the value in the libvirt xml configuration, so the VM guest machine type reported by dmidecode was altered. regards. > -----Ursprüngliche Nachricht----- > Von: Antoine Ginies [mailto:aginies at suse.com] > Gesendet: Tuesday, August 19, 2014 4:37 PM > An: Frey Urs, IT222 > Cc: sles-beta at lists.suse.com > Betreff: Re: [sles-beta] SLES12 RC1 KVM client Productname > > urs.frey at post.ch: > > Hi > > Hello, > > > > > When set up a SLES12 RC1 as KVM client, the productname looks quite strange now > > > > h05cnh:~ # facter | grep product > > productname => Standard PC (i440FX + PIIX, 1996) > > h05cnh:~ # uname -a > > Linux h05cnh 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux > > h05cnh:~ # facter | grep product > > productname => Standard PC (i440FX + PIIX, 1996) > > h05cnh:~ # dmidecode | grep Product > > Product Name: Standard PC (i440FX + PIIX, 1996) > > h05cnh:~ # facter | grep virtual > > is_virtual => true > > virtual => kvm > > h05cnh:~ # > > By default machine type for Qemu is pc-i440fx: > under SLES12 host: > linux-x61s:~ # qemu-system-x86_64 -M ? | grep default > pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > > and under SLE11SP3 host: > pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > > > > Until Beta9 the output was different and more readable, as it was under SLES11-SP3 > > h039ua:~ # uname -a > > Linux h039ua 3.12.22-2-default #1 SMP Fri Jun 13 13:46:18 UTC 2014 (ee1c2a2) x86_64 x86_64 x86_64 GNU/Linux > > h039ua:~ # facter | grep product > > productname => Bochs > > h039ua:~ # dmidecode | grep Product > > Product Name: Bochs > > h039ua:~ # facter | grep virtual > > is_virtual => true > > virtual => kvm > > h039ua:~ # > > > > Could this new unusual product name coming with SLES12-RC1 be considered as I bug? > > Machine type can be changed. So it's not a bug if the machine type was > specified to launch the VM guest. > > > regards.
> > > I was searching also in SUSEConnect to see how KVM gets recognized and will be treated to be registered as virtual client > > > > I'll appreciate to get more information about this new product naming of a KVM client > > > > And also of course to get a hint where about to search in the code of suse connect to get an idea about hovKVM gets treated please > > Thank you very much > > > > Best regards > > > > Urs Frey > > Post CH AG > > Informationstechnologie > > IT Betrieb > > Webergutstrasse 12 > > 3030 Bern (Zollikofen) > > Telefon : ++41 (0)58 338 58 70 > > FAX : ++41 (0)58 667 30 07 > > E-Mail: urs.frey at post.ch > > > > > > > > > _______________________________________________ > > sles-beta mailing list > > sles-beta at lists.suse.com > > http://lists.suse.com/mailman/listinfo/sles-beta > > > -- > Antoine Ginies > Project Manager > SUSE France -- Antoine Ginies Project Manager SUSE France From darrent at akurit.com.au Tue Aug 19 15:51:16 2014 From: darrent at akurit.com.au (Darren Thompson) Date: Wed, 20 Aug 2014 07:51:16 +1000 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? Message-ID: Team Just thinking out loud, please ignore if not relevant to you... I know that you can do an "in place" migration from SLES11 to SLES12 I know that you can do an "in place" migration of ext3 (the SLES11 default) to BTRFS (the SLES12 default) Done correctly an in-place SLES11 => SLES12 should produce a server that is indistinguishable from a clean install of SLES12 (except for retaining data etc) How do you set up the other "default" BTRFS sub-volumes and how do you move/migrate the existing subdirectories into the default BTRFS sub-volumes. Is there a migration tool/script that would help to "complete" the ext3 => BTRFS migration process? Darren Thompson Professional Services Engineer / Consultant *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* Level 3, 60 City Road Southgate, VIC 3006 Mb: 0400 640 414 Mail: darrent at akurit.com.au Web: www.akurit.com.au -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3692 bytes Desc: not available URL: From mge at suse.com Tue Aug 19 16:48:00 2014 From: mge at suse.com (Matthias G. Eckermann) Date: Wed, 20 Aug 2014 00:48:00 +0200 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: References: Message-ID: <20140819224800.GC31195@suse.com> Hello Darren and all, On 2014-08-20 T 07:51 +1000 Darren Thompson wrote: > Just thinking out loud, please ignore if not relevant > to you... the question indeed is important, thus let me answer in two steps: I. What we do (not) support and why (not) II. How you may achieve (soon) what you are looking for Ad I. What we do (not) support and why (not) Let me share some SUSE internal discussion (from 2012 already): Back then we discussed, if we should support the in place migration of SLES 11 to SLES 12 including a migration of ext3 to btrfs. And we decided against this for one simple reason: complexity. If you look at the changes from SLES 11 to SLES 12: - default filesystem ext3 -> btrfs - bootloader grub1 -> grub2 - init system sysvinit -> systemd - network ifcfg -> wicked ... The "logic" how to migrate things is built into YaST (and AutoYaST / AutoUpgrade) respectively, and we do cover migration of bootloader, init system, network and more with this (YaST based) migration path. 
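The filesystem itself is a different story. For a plain data partition, an offline ext3 to btrfs conversion is a single step; a minimal sketch, assuming an ext3 volume on /dev/sdb1 mounted at /data (device and mount point are placeholders, and the filesystem must be unmounted and clean first):

  umount /data
  fsck.ext3 -f /dev/sdb1
  btrfs-convert /dev/sdb1
  mount /dev/sdb1 /data

btrfs-convert keeps an image of the original ext3 metadata in a saved subvolume, so btrfs-convert -r /dev/sdb1 can roll the conversion back as long as that image has not been deleted.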
We hesitated to support an inplace migration from ext3 to btrfs for the "/" filesystem, though, and we are not supporting it. Why? 1. The inplace migration of ext3 to btrfs requires some free space on the source file system. 2. The inplace migration only works for ext3 filesystems which have been created with specific options; this for example does not always work for filesystems created by SLES 10 and early versions of SLES 11. 3. The disk partitioning must be prepared to have space for the Grub2 first- and second-stages. This is not true for all partitionings created with older SLES versions (including early SLES 11 versions). 4. People who are thinking about migrating "/" to btrfs most probably want to do so, to enable snapshot/ rollback as the _real_ benefit. For _this_ to work, though, you would get rid of the the "/boot" partition, as for rollback to work "/boot" must be on the same file system as "/", and it not even can be a btrfs subvolume, ... Now, if "/" would be a migratable ext3 filesystem and if it would have enough space and even enough space to cover /boot, and if the bootloader pieces would fit on the disk, and if you would be able to run an AutoYaST profile on it, to help migrating the other stuff mentioned above, ... => Too many "ifs". Too many ifs to test this, and too many ifs to make this reliably working and supported. > I know that you can do an "in place" migration from > SLES11 to SLES12 I know that you can do an "in place" > migration of ext3 (the SLES11 default) to BTRFS (the > SLES12 default) > > Done correctly an in-place SLES11 => SLES12 should > produce a server that is indistinguishable from a clean > install of SLES12 (except for retaining data etc) I agree that as a "manual" process you might be able to migrate a SLES 11 SP3 system to SLES 12 this way. It's an error prone process, and you may end up losing data, e.g. if you have to re-partition to create space for the Grub2 bootloader pieces on your disk. However,let's think about what you really want. Let's start more theoretically ... II. How you may achieve (soon) what you are looking for You want a SLES 12 system, with snapshot/rollback for the "full system" without the need to reconfigure and tweak everything you did configure and tweak for SLES 11. Is this the correct understanding? > How do you set up the other "default" BTRFS > sub-volumes and how do you move/migrate the existing > subdirectories into the default BTRFS sub-volumes. > > Is there a migration tool/script that would help to > "complete" the ext3 => BTRFS migration process? If yes, there are obviously two ways to achieve your goal: 1. In place migration; as discussed above, and not supportable / not supported. 2. A way to preserve your complex configurations (which you did in SLES 11 ) and re-apply them on SLES 12. Some tool, which - "inspects" your SLES 11 system - "validates" and normalizes your SLES 11 configuration - allows you to "show" and change the configuration - helps you to "build" a SLES 12 system based on your (former) configuration (using KIWI). Do you remember the steps of "inspect" / "validate" / "show" / "build"? It's what the tool "machinery" in the "Advanced Systems Management Module" is meant for. And while the tool is not fully there yet, where we want it to be, it is the most promising way of migrating complex configurations between operating system versions or even operating systems going forward. Hope this helps to explain SUSE's plans in this area. So long - MgE -- Matthias G. 
Eckermann Senior Product Manager SUSE® Linux Enterprise Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com SUSE LINUX Products GmbH Maxfeldstraße 5 90409 Nürnberg Germany GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) From darrent at akurit.com.au Tue Aug 19 18:01:44 2014 From: darrent at akurit.com.au (Darren Thompson) Date: Wed, 20 Aug 2014 10:01:44 +1000 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: <20140819224800.GC31195@suse.com> References: <20140819224800.GC31195@suse.com> Message-ID: Matthias Thank you for a truly comprehensive answer. I'm still unclear on one point though... I believe an "in place" update from SLES11 => SLES12 is tested and a viable option (please correct me if I'm incorrect). That leaves the default ext3 file-system from SLES11, so you miss out on the SLES12 roll-back advantage (as you have clearly explained). I was under the impression that there was a straightforward process to go from ext3 => BTRFS, but I now see that there are a lot of "prerequisites" that may be more difficult to meet. Assuming that the SLES11 => SLES12 upgrade is done in one step and the ext3 file-system is migrated as a separate process, is there actually a guide as to what steps are required to achieve that, e.g. {this is a guess at the process} 1. Boot from SLES12 media and go into "recovery mode" 2. Using an appropriate tool, make a backup copy of '/', '/boot' and any other required file-systems. 3. Create an appropriate BTRFS root filesystem using command line 'xxxx', which creates the required '/' and 'default' sub-volumes (including snapshots). 4. Mount the BTRFS '/' and sub-volumes in a temporary location, restore the backed up '/', '/boot' and other file-systems to the new BTRFS file-system. 5. Chroot into the BTRFS '/' and restore the bootloader config etc. **** or remount root filesystem *** Not sure of this part.... 6. Reboot 7. Set up snapper config etc. Is this close to correct and is it worth testing as a Beta tester??? Regards Darren Darren Thompson Professional Services Engineer / Consultant *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* Level 3, 60 City Road Southgate, VIC 3006 Mb: 0400 640 414 Mail: darrent at akurit.com.au Web: www.akurit.com.au On 20 August 2014 08:48, Matthias G. Eckermann wrote: > Hello Darren and all, > > On 2014-08-20 T 07:51 +1000 Darren Thompson wrote: > > > Just thinking out loud, please ignore if not relevant > > to you... > > the question indeed is important, thus let me answer in > two steps: > I. What we do (not) support and why (not) > II. How you may achieve (soon) what you are looking for > > > Ad I. What we do (not) support and why (not) > > Let me share some SUSE internal discussion (from 2012 > already): Back then we discussed, if we should support > the in place migration of SLES 11 to SLES 12 including a > migration of ext3 to btrfs. > > And we decided against this for one simple reason: complexity. > > If you look at the changes from SLES 11 to SLES 12: > - default filesystem ext3 -> btrfs > - bootloader grub1 -> grub2 > - init system sysvinit -> systemd > - network ifcfg -> wicked > ... > > The "logic" how to migrate things is built into YaST (and > AutoYaST / AutoUpgrade) respectively, and we do cover > migration of bootloader, init system, network and more > with this (YaST based) migration path. > > We hesitated to support an inplace migration from ext3 to > btrfs for the "/" filesystem, though, and we are not > supporting it. Why? > 1.
The inplace migration of ext3 to btrfs requires some > free space on the source file system. > 2. The inplace migration only works for ext3 filesystems > which have been created with specific options; this > for example does not always work for filesystems > created by SLES 10 and early versions of SLES 11. > 3. The disk partitioning must be prepared to have space > for the Grub2 first- and second-stages. This is not > true for all partitionings created with older SLES > versions (including early SLES 11 versions). > 4. People who are thinking about migrating "/" to btrfs > most probably want to do so, to enable snapshot/ > rollback as the _real_ benefit. > For _this_ to work, though, you would get rid of the > the "/boot" partition, as for rollback to work > "/boot" must be on the same file system as "/", > and it not even can be a btrfs subvolume, ... > > Now, if "/" would be a migratable ext3 filesystem and if > it would have enough space and even enough space to cover > /boot, and if the bootloader pieces would fit on the disk, > and if you would be able to run an AutoYaST profile on it, > to help migrating the other stuff mentioned above, ... > > => Too many "ifs". Too many ifs to test this, and too many > ifs to make this reliably working and supported. > > > I know that you can do an "in place" migration from > > SLES11 to SLES12 I know that you can do an "in place" > > migration of ext3 (the SLES11 default) to BTRFS (the > > SLES12 default) > > > > Done correctly an in-place SLES11 => SLES12 should > > produce a server that is indistinguishable from a clean > > install of SLES12 (except for retaining data etc) > > I agree that as a "manual" process you might be able to > migrate a SLES 11 SP3 system to SLES 12 this way. > > It's an error prone process, and you may end up losing > data, e.g. if you have to re-partition to create space > for the Grub2 bootloader pieces on your disk. > > However,let's think about what you really want. Let's > start more theoretically ... > > > II. How you may achieve (soon) what you are looking for > > You want a SLES 12 system, with snapshot/rollback for the > "full system" without the need to reconfigure and tweak > everything you did configure and tweak for SLES 11. > > Is this the correct understanding? > > > How do you set up the other "default" BTRFS > > sub-volumes and how do you move/migrate the existing > > subdirectories into the default BTRFS sub-volumes. > > > > Is there a migration tool/script that would help to > > "complete" the ext3 => BTRFS migration process? > > If yes, there are obviously two ways to achieve your goal: > > 1. In place migration; as discussed above, and not > supportable / not supported. > > 2. A way to preserve your complex configurations (which > you did in SLES 11 ) and re-apply them on SLES 12. > Some tool, which > - "inspects" your SLES 11 system > - "validates" and normalizes your SLES 11 configuration > - allows you to "show" and change the configuration > - helps you to "build" a SLES 12 system based on your > (former) configuration (using KIWI). > > Do you remember the steps of "inspect" / "validate" / > "show" / "build"? > > It's what the tool "machinery" in the "Advanced Systems > Management Module" is meant for. And while the tool is not > fully there yet, where we want it to be, it is the most > promising way of migrating complex configurations between > operating system versions or even operating systems going > forward. > > Hope this helps to explain SUSE's plans in this area. 
> > So long - > MgE > > -- > Matthias G. Eckermann Senior Product Manager SUSE? Linux Enterprise > Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com > SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg Germany > GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG N?rnberg) > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3692 bytes Desc: not available URL: From mge at suse.com Tue Aug 19 20:28:52 2014 From: mge at suse.com (Matthias G. Eckermann) Date: Wed, 20 Aug 2014 04:28:52 +0200 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: References: <20140819224800.GC31195@suse.com> Message-ID: <20140820022852.GA2524@suse.com> Hello Darren and all, On 2014-08-20 T 10:01 +1000 Darren Thompson wrote: > I'm still unclear on one point though... > > I believe an "in place" update from SLES11 => SLES12 > is tested and a viable option (please correct me if > I'm incorrect). Yes, in place upgrade from SLES 11 => SLES 12 is supported. To compare with the "real world": when you upgrade your house to have new electricity, new water pipes, new central heating and new windows, all "in place" (instead of the old), you still do not change the floor plan of your house. > That leaves the default ext3 file-system from SLES11 > so you miss out on the SLES12 roll-back advantage (as > you have clearly explained). Yes. To remain in the picture: to do that, you would have to create a new basement slab -- while your house and all the upper floors remain not yet upgraded -- and change (parts of) the floor plan. ... Well ... > I was under the impression that there was a straight > forward process to go from ext3 => BTRFS but I now see > that there are a lot of "prerequisites" that may be > more difficult to meet. We do support "offline in place" migration from ext3 => btrfs for data partitions. > Assuming that the SLES11 => SLES12 is done in one step > and the ext3 file-system is migrated as a separate > process, is there actually a guide as to what steps > are required to achieve that e.g. {this is a guess at > the process} [....] > Is this close to correct and is it worth testing as a > Beta tester??? Well, as said in my E-Mail before: instead of pursuing this kind of excercise or migration path, I suggest to have a look at "machinery" and "kiwi", as this is what will be supported, once machinery is out of the TechPreview status. As a benefit, going forward this can not only be used to go from 11 to 12, but also from physical to virtual or cloud or vice versa. Lots of options, ... Enjoy! so long - MgE > On 20 August 2014 08:48, Matthias G. Eckermann wrote: > > > Hello Darren and all, > > > > On 2014-08-20 T 07:51 +1000 Darren Thompson wrote: > > > > > Just thinking out loud, please ignore if not relevant > > > to you... > > > > the question indeed is important, thus let me answer in > > two steps: > > I. What we do (not) support and why (not) > > II. How you may achieve (soon) what you are looking for > > > > > > Ad I. What we do (not) support and why (not) > > > > Let me share some SUSE internal discussion (from 2012 > > already): Back then we discussed, if we should support > > the in place migration of SLES 11 to SLES 12 including a > > migration of ext3 to btrfs. > > > > And we decided against this for one simple reason: complexity. 
> > > > If you look at the changes from SLES 11 to SLES 12: > > - default filesystem ext3 -> btrfs > > - bootloader grub1 -> grub2 > > - init system sysvinit -> systemd > > - network ifcfg -> wicked > > ... > > > > The "logic" how to migrate things is built into YaST (and > > AutoYaST / AutoUpgrade) respectively, and we do cover > > migration of bootloader, init system, network and more > > with this (YaST based) migration path. > > > > We hesitated to support an inplace migration from ext3 to > > btrfs for the "/" filesystem, though, and we are not > > supporting it. Why? > > 1. The inplace migration of ext3 to btrfs requires some > > free space on the source file system. > > 2. The inplace migration only works for ext3 filesystems > > which have been created with specific options; this > > for example does not always work for filesystems > > created by SLES 10 and early versions of SLES 11. > > 3. The disk partitioning must be prepared to have space > > for the Grub2 first- and second-stages. This is not > > true for all partitionings created with older SLES > > versions (including early SLES 11 versions). > > 4. People who are thinking about migrating "/" to btrfs > > most probably want to do so, to enable snapshot/ > > rollback as the _real_ benefit. > > For _this_ to work, though, you would get rid of the > > the "/boot" partition, as for rollback to work > > "/boot" must be on the same file system as "/", > > and it not even can be a btrfs subvolume, ... > > > > Now, if "/" would be a migratable ext3 filesystem and if > > it would have enough space and even enough space to cover > > /boot, and if the bootloader pieces would fit on the disk, > > and if you would be able to run an AutoYaST profile on it, > > to help migrating the other stuff mentioned above, ... > > > > => Too many "ifs". Too many ifs to test this, and too many > > ifs to make this reliably working and supported. > > > > > I know that you can do an "in place" migration from > > > SLES11 to SLES12 I know that you can do an "in place" > > > migration of ext3 (the SLES11 default) to BTRFS (the > > > SLES12 default) > > > > > > Done correctly an in-place SLES11 => SLES12 should > > > produce a server that is indistinguishable from a clean > > > install of SLES12 (except for retaining data etc) > > > > I agree that as a "manual" process you might be able to > > migrate a SLES 11 SP3 system to SLES 12 this way. > > > > It's an error prone process, and you may end up losing > > data, e.g. if you have to re-partition to create space > > for the Grub2 bootloader pieces on your disk. > > > > However,let's think about what you really want. Let's > > start more theoretically ... > > > > > > II. How you may achieve (soon) what you are looking for > > > > You want a SLES 12 system, with snapshot/rollback for the > > "full system" without the need to reconfigure and tweak > > everything you did configure and tweak for SLES 11. > > > > Is this the correct understanding? > > > > > How do you set up the other "default" BTRFS > > > sub-volumes and how do you move/migrate the existing > > > subdirectories into the default BTRFS sub-volumes. > > > > > > Is there a migration tool/script that would help to > > > "complete" the ext3 => BTRFS migration process? > > > > If yes, there are obviously two ways to achieve your goal: > > > > 1. In place migration; as discussed above, and not > > supportable / not supported. > > > > 2. A way to preserve your complex configurations (which > > you did in SLES 11 ) and re-apply them on SLES 12. 
> > Some tool, which > > - "inspects" your SLES 11 system > > - "validates" and normalizes your SLES 11 configuration > > - allows you to "show" and change the configuration > > - helps you to "build" a SLES 12 system based on your > > (former) configuration (using KIWI). > > > > Do you remember the steps of "inspect" / "validate" / > > "show" / "build"? > > > > It's what the tool "machinery" in the "Advanced Systems > > Management Module" is meant for. And while the tool is not > > fully there yet, where we want it to be, it is the most > > promising way of migrating complex configurations between > > operating system versions or even operating systems going > > forward. > > > > Hope this helps to explain SUSE's plans in this area. > > > > So long - > > MgE > > > > -- > > Matthias G. Eckermann Senior Product Manager SUSE? Linux Enterprise > > Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com > > SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg Germany > > GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG N?rnberg) > > -- Matthias G. Eckermann Senior Product Manager SUSE? Linux Enterprise Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg Germany GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG N?rnberg) From darrent at akurit.com.au Tue Aug 19 20:34:57 2014 From: darrent at akurit.com.au (Darren Thompson) Date: Wed, 20 Aug 2014 12:34:57 +1000 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: <53F40645.5080106@lusolabs.com> References: <20140819224800.GC31195@suse.com> <53F40645.5080106@lusolabs.com> Message-ID: Filipe Thank you, again a full and thorough reply... It's good to know I'm not the only one considering these options... Darren Darren Thompson Professional Services Engineer / Consultant *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* Level 3, 60 City Road Southgate, VIC 3006 Mb: 0400 640 414 Mail: darrent at akurit.com.au Web: www.akurit.com.au On 20 August 2014 12:21, Filipe Lacerda wrote: > Hello Darren, > > Sorry to jump into the thread. I did that question outside the list, some > weeks ago in Dusseldorf, while on a training at SUSE. > > Basically, a direct "in place" migration / live upgrade, will not be > supported by default. It is possible to do it if you follow the "if's" list > that Mathias wrote, and some more from your own scenarios. But due to the > unlimited scenarios that you could create with other SLES versions, as well > as the big changes that SLES 12 offers over the previous ones, a "regular" > and default acceptance / supported path is not possible. I think I'm not > wrong, am I Mathias? > > However, if for a particular landscape / scenario, you find a way to do > it, meaning, if you create your own protocol, and managed to do it all the > way by yourself, and the final result is a working machine / system, and it > is able to run the *supportconfig* tool , and the support accepts the > result as a valid machine / system, you will end up with a supported > system. But it is a case by case possibility, and up to the SUSE support to > decide if the result is support or not. But you will have to do the hard > lifting by yourself. > > I had this in the past when migrating a big infrastructure from SLES 10 to > SLES 11. Due to the client architecture done by a previous supplier, it was > not possible an upgrade "in place" for those landscapes. 
SUSE support said > primarily that the upgrade was not supported and a full fresh install had > to be done for each system. Finally, I , together with SUSE support, > managed to get an agreement. I did the hard lifting, and the final result, > acceptable for me an my client, was presented to SUSE support. After > supportconfig tool was run, the systems were analysed and accepted as > valid, and therefore they were able to continue as supported. > > As I mentioned before, this is a SUSE support decision, and case by case. > Therefore, by default, it is a no. > > From my experience, create a fresh install, and copy configs over the wire > ( wicked will, as we already seen, read old style config files for > networking) and data to the new volumes. Then your up to go. Or, create a > fresh install, from the previous SLES version (same of the machine to be > migrated) with the config, prerequisites and "if's" that were mentioned, > copy data and configs over the wire, and then make the "in place" migration. > > I would love not to have to passed trough that again :) but in my case, i > had no choice but to go as i explained. If you have a major client, or your > own infrastructure to migrate, big, and fresh install is just not > acceptable, either reason, go ahead and test it :) Otherwise, save your > time. :) My two cents. > > There is a list of must have volumes, and partition scheme, for you to be > able to get the "in place" upgrade working, but i cannot find it right now. > This would be most helpful for you, if you want to create your own protocol > to test the migration. Mathias might have it, or someone at SUSE ?! > > One note, Mathias talked about "Machinery". I've tested this new module. > It is impressive. It will make me say goodbye to my custom scripts, for > audit and data gathering of an infrastructure, and create a regular way to > do it with almost no effort. And it is *scriptable* :) . From an > operations point of view, it will standardize a lot of procedures for > collect information , and to replicate that system using gathered > information. > > Another note. If you install a SLES 12 and accept default partition > scheme, it will be supported and with system rollback functionality and > further support options. But if you/we decide to change the partition > scheme / layout, rollback functionality might be broken and some forms of > support might not be accepted. > > Hope i could be of some assistance. > > Cheers, > > > -- > ------------------------------ > *Filipe Lacerda* > > *Chief Technical Officer ISO 27001 Lead Auditor* Tel: (+351)211 201 650 > Fax: (+351)211 201 634 Tmv: (+351)91 812 01 24 MIPE - Tecnologias de > Informa??o Lda Avenida da Liberdade, 36 - 6? 1250-145 Lisboa > http://www.lusolabs.com > > Esta mensagem ? confidencial e ? propriedade da MIPE - Tecnologias de > Informa??o Lda. Destina-se unicamente ao conhecimento do destinat?rio nela > identificado. Caso n?o seja o destinat?rio, n?o est? autorizado a ler, > reter, imprimir, copiar, divulgar, distribuir ou utiliza-la, parcial ou > totalmente. Caso tenha recebido esta mensagem indevidamente, queira > informar de imediato o remetente e proceder ? destrui??o de todas as c?pias > da mesma. Esta mensagem e ficheiros anexos foram tratados para estarem > livres de v?rus ou de qualquer outro defeito. MIPE - Tecnologias de > Informa??o Lda n?o ? respons?vel por perdas e danos resultantes do uso > desta mensagem. 
O correio electr?nico via Internet n?o permite assegurar a > confidencialidade ou a correcta recep??o das mensagens. Para mais > informa??es acerca da MIPE por favor visite o nosso website em > http://www.lusolabs.com > > *P* Antes de imprimir este e-mail pense bem se tem mesmo que o fazer. H? > cada vez menos ?rvores! / Before printing this e-mail, assess if it is > really needed > > > On 20/08/14 01:01, Darren Thompson wrote: > > Matthias > > Thank you for a truly comprehensive answer. > > I'm still unclear on one point though... > > I believe an "in place" update from SLES11 => SLES12 is tested and a > viable option (please correct me if I'm incorrect). > > That leaves the default ext3 file-system from SLES11 so you miss out on > the SLES12 roll-back advantage (as you have clearly explained). > > I was under the impression that there was a straight forward process to > go from ext3 => BTRFS but I now see that there are a lot of "prerequisites" > that may be more difficult to meet. > > Assuming that the SLES11 => SLES12 is done in one step and the ext3 > file-system is migrated as a separate process, is there actually a guide as > to what steps are required to achieve that e.g. > {this is a guess at the process} > 1. Boot from SLES12 media and go into "recovery mode" > 2. Using appropriate tool make backup copy of '/', '/boot' and any other > required file-systems. > 3. Create appropriate BTRFS root filesystem using command line 'xxxx" > which creates required '/' and "default' sub-volumes (including snapshots). > 4. mount BTRFS '/' and sub-volumes in temporary location, restore backed > up '/', '/boot' and other file-systems to new BTRFS file-system. > 5. Chroot into BRTFS '/' and restore bootloader config etc **** or remount > root filesystem *** Not sure of this part.... > 6. Reboot > 7. setup snapper config etc > > Is this close to correct and is it worth testing as a Beta tester??? > > Regards > Darren > > 5. > > > Darren Thompson > > Professional Services Engineer / Consultant > > *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* > > Level 3, 60 City Road > > Southgate, VIC 3006 > > Mb: 0400 640 414 > > Mail: darrent at akurit.com.au > Web: www.akurit.com.au > > > On 20 August 2014 08:48, Matthias G. Eckermann wrote: > >> Hello Darren and all, >> >> On 2014-08-20 T 07:51 +1000 Darren Thompson wrote: >> >> > Just thinking out loud, please ignore if not relevant >> > to you... >> >> the question indeed is important, thus let me answer in >> two steps: >> I. What we do (not) support and why (not) >> II. How you may achieve (soon) what you are looking for >> >> >> Ad I. What we do (not) support and why (not) >> >> Let me share some SUSE internal discussion (from 2012 >> already): Back then we discussed, if we should support >> the in place migration of SLES 11 to SLES 12 including a >> migration of ext3 to btrfs. >> >> And we decided against this for one simple reason: complexity. >> >> If you look at the changes from SLES 11 to SLES 12: >> - default filesystem ext3 -> btrfs >> - bootloader grub1 -> grub2 >> - init system sysvinit -> systemd >> - network ifcfg -> wicked >> ... >> >> The "logic" how to migrate things is built into YaST (and >> AutoYaST / AutoUpgrade) respectively, and we do cover >> migration of bootloader, init system, network and more >> with this (YaST based) migration path. >> >> We hesitated to support an inplace migration from ext3 to >> btrfs for the "/" filesystem, though, and we are not >> supporting it. Why? >> 1. 
The inplace migration of ext3 to btrfs requires some >> free space on the source file system. >> 2. The inplace migration only works for ext3 filesystems >> which have been created with specific options; this >> for example does not always work for filesystems >> created by SLES 10 and early versions of SLES 11. >> 3. The disk partitioning must be prepared to have space >> for the Grub2 first- and second-stages. This is not >> true for all partitionings created with older SLES >> versions (including early SLES 11 versions). >> 4. People who are thinking about migrating "/" to btrfs >> most probably want to do so, to enable snapshot/ >> rollback as the _real_ benefit. >> For _this_ to work, though, you would get rid of the >> the "/boot" partition, as for rollback to work >> "/boot" must be on the same file system as "/", >> and it not even can be a btrfs subvolume, ... >> >> Now, if "/" would be a migratable ext3 filesystem and if >> it would have enough space and even enough space to cover >> /boot, and if the bootloader pieces would fit on the disk, >> and if you would be able to run an AutoYaST profile on it, >> to help migrating the other stuff mentioned above, ... >> >> => Too many "ifs". Too many ifs to test this, and too many >> ifs to make this reliably working and supported. >> >> > I know that you can do an "in place" migration from >> > SLES11 to SLES12 I know that you can do an "in place" >> > migration of ext3 (the SLES11 default) to BTRFS (the >> > SLES12 default) >> > >> > Done correctly an in-place SLES11 => SLES12 should >> > produce a server that is indistinguishable from a clean >> > install of SLES12 (except for retaining data etc) >> >> I agree that as a "manual" process you might be able to >> migrate a SLES 11 SP3 system to SLES 12 this way. >> >> It's an error prone process, and you may end up losing >> data, e.g. if you have to re-partition to create space >> for the Grub2 bootloader pieces on your disk. >> >> However,let's think about what you really want. Let's >> start more theoretically ... >> >> >> II. How you may achieve (soon) what you are looking for >> >> You want a SLES 12 system, with snapshot/rollback for the >> "full system" without the need to reconfigure and tweak >> everything you did configure and tweak for SLES 11. >> >> Is this the correct understanding? >> >> > How do you set up the other "default" BTRFS >> > sub-volumes and how do you move/migrate the existing >> > subdirectories into the default BTRFS sub-volumes. >> > >> > Is there a migration tool/script that would help to >> > "complete" the ext3 => BTRFS migration process? >> >> If yes, there are obviously two ways to achieve your goal: >> >> 1. In place migration; as discussed above, and not >> supportable / not supported. >> >> 2. A way to preserve your complex configurations (which >> you did in SLES 11 ) and re-apply them on SLES 12. >> Some tool, which >> - "inspects" your SLES 11 system >> - "validates" and normalizes your SLES 11 configuration >> - allows you to "show" and change the configuration >> - helps you to "build" a SLES 12 system based on your >> (former) configuration (using KIWI). >> >> Do you remember the steps of "inspect" / "validate" / >> "show" / "build"? >> >> It's what the tool "machinery" in the "Advanced Systems >> Management Module" is meant for. And while the tool is not >> fully there yet, where we want it to be, it is the most >> promising way of migrating complex configurations between >> operating system versions or even operating systems going >> forward. 
>> >> Hope this helps to explain SUSE's plans in this area. >> >> So long - >> MgE >> >> -- >> Matthias G. Eckermann Senior Product Manager SUSE? Linux Enterprise >> Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com >> SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg Germany >> GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG N?rnberg) >> > > > > _______________________________________________ > sles-beta mailing listsles-beta at lists.suse.comhttp://lists.suse.com/mailman/listinfo/sles-beta > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001k.png Type: image/jpeg Size: 5382 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3692 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3692 bytes Desc: not available URL: From darrent at akurit.com.au Tue Aug 19 20:40:14 2014 From: darrent at akurit.com.au (Darren Thompson) Date: Wed, 20 Aug 2014 12:40:14 +1000 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: <20140820022852.GA2524@suse.com> References: <20140819224800.GC31195@suse.com> <20140820022852.GA2524@suse.com> Message-ID: Matthias Thanks again, looks like I'm going to have to have a good look at the "Machine" module. I have seen "kiwi" in the context of a "JEOS" install, I'm not sure I understand it's role/involvement in a "inspect" / "validate" / "show" / "build" migration using 'machine'. Regards Darren Darren Thompson Professional Services Engineer / Consultant *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* Level 3, 60 City Road Southgate, VIC 3006 Mb: 0400 640 414 Mail: darrent at akurit.com.au Web: www.akurit.com.au On 20 August 2014 12:28, Matthias G. Eckermann wrote: > Hello Darren and all, > > On 2014-08-20 T 10:01 +1000 Darren Thompson wrote: > > > I'm still unclear on one point though... > > > > I believe an "in place" update from SLES11 => SLES12 > > is tested and a viable option (please correct me if > > I'm incorrect). > > Yes, in place upgrade from SLES 11 => SLES 12 is > supported. > > To compare with the "real world": when you upgrade your > house to have new electricity, new water pipes, new > central heating and new windows, all "in place" (instead > of the old), you still do not change the floor plan of > your house. > > > That leaves the default ext3 file-system from SLES11 > > so you miss out on the SLES12 roll-back advantage (as > > you have clearly explained). > > Yes. > > To remain in the picture: to do that, you would have to > create a new basement slab -- while your house and all > the upper floors remain not yet upgraded -- and change > (parts of) the floor plan. ... Well ... > > > I was under the impression that there was a straight > > forward process to go from ext3 => BTRFS but I now see > > that there are a lot of "prerequisites" that may be > > more difficult to meet. > > We do support "offline in place" migration from ext3 => > btrfs for data partitions. > > > Assuming that the SLES11 => SLES12 is done in one step > > and the ext3 file-system is migrated as a separate > > process, is there actually a guide as to what steps > > are required to achieve that e.g. {this is a guess at > > the process} > [....] 
> > Is this close to correct and is it worth testing as a > > Beta tester??? > > Well, as said in my E-Mail before: instead of pursuing > this kind of excercise or migration path, I suggest to > have a look at "machinery" and "kiwi", as this is what > will be supported, once machinery is out of the > TechPreview status. > > As a benefit, going forward this can not only be used to > go from 11 to 12, but also from physical to virtual or > cloud or vice versa. Lots of options, ... > > Enjoy! > > so long - > MgE > > > > On 20 August 2014 08:48, Matthias G. Eckermann wrote: > > > > > Hello Darren and all, > > > > > > On 2014-08-20 T 07:51 +1000 Darren Thompson wrote: > > > > > > > Just thinking out loud, please ignore if not relevant > > > > to you... > > > > > > the question indeed is important, thus let me answer in > > > two steps: > > > I. What we do (not) support and why (not) > > > II. How you may achieve (soon) what you are looking for > > > > > > > > > Ad I. What we do (not) support and why (not) > > > > > > Let me share some SUSE internal discussion (from 2012 > > > already): Back then we discussed, if we should support > > > the in place migration of SLES 11 to SLES 12 including a > > > migration of ext3 to btrfs. > > > > > > And we decided against this for one simple reason: complexity. > > > > > > If you look at the changes from SLES 11 to SLES 12: > > > - default filesystem ext3 -> btrfs > > > - bootloader grub1 -> grub2 > > > - init system sysvinit -> systemd > > > - network ifcfg -> wicked > > > ... > > > > > > The "logic" how to migrate things is built into YaST (and > > > AutoYaST / AutoUpgrade) respectively, and we do cover > > > migration of bootloader, init system, network and more > > > with this (YaST based) migration path. > > > > > > We hesitated to support an inplace migration from ext3 to > > > btrfs for the "/" filesystem, though, and we are not > > > supporting it. Why? > > > 1. The inplace migration of ext3 to btrfs requires some > > > free space on the source file system. > > > 2. The inplace migration only works for ext3 filesystems > > > which have been created with specific options; this > > > for example does not always work for filesystems > > > created by SLES 10 and early versions of SLES 11. > > > 3. The disk partitioning must be prepared to have space > > > for the Grub2 first- and second-stages. This is not > > > true for all partitionings created with older SLES > > > versions (including early SLES 11 versions). > > > 4. People who are thinking about migrating "/" to btrfs > > > most probably want to do so, to enable snapshot/ > > > rollback as the _real_ benefit. > > > For _this_ to work, though, you would get rid of the > > > the "/boot" partition, as for rollback to work > > > "/boot" must be on the same file system as "/", > > > and it not even can be a btrfs subvolume, ... > > > > > > Now, if "/" would be a migratable ext3 filesystem and if > > > it would have enough space and even enough space to cover > > > /boot, and if the bootloader pieces would fit on the disk, > > > and if you would be able to run an AutoYaST profile on it, > > > to help migrating the other stuff mentioned above, ... > > > > > > => Too many "ifs". Too many ifs to test this, and too many > > > ifs to make this reliably working and supported. 
> > > > > > > I know that you can do an "in place" migration from > > > > SLES11 to SLES12 I know that you can do an "in place" > > > > migration of ext3 (the SLES11 default) to BTRFS (the > > > > SLES12 default) > > > > > > > > Done correctly an in-place SLES11 => SLES12 should > > > > produce a server that is indistinguishable from a clean > > > > install of SLES12 (except for retaining data etc) > > > > > > I agree that as a "manual" process you might be able to > > > migrate a SLES 11 SP3 system to SLES 12 this way. > > > > > > It's an error prone process, and you may end up losing > > > data, e.g. if you have to re-partition to create space > > > for the Grub2 bootloader pieces on your disk. > > > > > > However,let's think about what you really want. Let's > > > start more theoretically ... > > > > > > > > > II. How you may achieve (soon) what you are looking for > > > > > > You want a SLES 12 system, with snapshot/rollback for the > > > "full system" without the need to reconfigure and tweak > > > everything you did configure and tweak for SLES 11. > > > > > > Is this the correct understanding? > > > > > > > How do you set up the other "default" BTRFS > > > > sub-volumes and how do you move/migrate the existing > > > > subdirectories into the default BTRFS sub-volumes. > > > > > > > > Is there a migration tool/script that would help to > > > > "complete" the ext3 => BTRFS migration process? > > > > > > If yes, there are obviously two ways to achieve your goal: > > > > > > 1. In place migration; as discussed above, and not > > > supportable / not supported. > > > > > > 2. A way to preserve your complex configurations (which > > > you did in SLES 11 ) and re-apply them on SLES 12. > > > Some tool, which > > > - "inspects" your SLES 11 system > > > - "validates" and normalizes your SLES 11 configuration > > > - allows you to "show" and change the configuration > > > - helps you to "build" a SLES 12 system based on your > > > (former) configuration (using KIWI). > > > > > > Do you remember the steps of "inspect" / "validate" / > > > "show" / "build"? > > > > > > It's what the tool "machinery" in the "Advanced Systems > > > Management Module" is meant for. And while the tool is not > > > fully there yet, where we want it to be, it is the most > > > promising way of migrating complex configurations between > > > operating system versions or even operating systems going > > > forward. > > > > > > Hope this helps to explain SUSE's plans in this area. > > > > > > So long - > > > MgE > > > > > > -- > > > Matthias G. Eckermann Senior Product Manager SUSE? Linux > Enterprise > > > Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: > mge at suse.com > > > SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg > Germany > > > GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG > N?rnberg) > > > > > > > -- > Matthias G. Eckermann Senior Product Manager SUSE? Linux Enterprise > Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com > SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg Germany > GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG N?rnberg) > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.jpg Type: image/jpeg Size: 3692 bytes Desc: not available URL: From darrent at akurit.com.au Tue Aug 19 21:11:31 2014 From: darrent at akurit.com.au (Darren Thompson) Date: Wed, 20 Aug 2014 13:11:31 +1000 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: <53F40FB2.4010609@lusolabs.com> References: <20140819224800.GC31195@suse.com> <20140820022852.GA2524@suse.com> <53F40DDB.8060601@lusolabs.com> <53F40FB2.4010609@lusolabs.com> Message-ID: Filipe Excellent, thanks for the links... Looks like I've got some reading/researching to do ;-) Darren Darren Thompson Professional Services Engineer / Consultant *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* Level 3, 60 City Road Southgate, VIC 3006 Mb: 0400 640 414 Mail: darrent at akurit.com.au Web: www.akurit.com.au On 20 August 2014 13:02, Filipe Lacerda wrote: > Hi again, > > I've forgot this:) > > http://machinery-project.org/ > > https://github.com/SUSE/machinery > > Two links to know more about this 'Machinery' stuff. > > Cheers, > > > -- > ------------------------------ > *Filipe Lacerda* > > *Chief Technical Officer ISO 27001 Lead Auditor* Tel: (+351)211 201 650 > Fax: (+351)211 201 634 Tmv: (+351)91 812 01 24 MIPE - Tecnologias de > Informa??o Lda Avenida da Liberdade, 36 - 6? 1250-145 Lisboa > http://www.lusolabs.com > > Esta mensagem ? confidencial e ? propriedade da MIPE - Tecnologias de > Informa??o Lda. Destina-se unicamente ao conhecimento do destinat?rio nela > identificado. Caso n?o seja o destinat?rio, n?o est? autorizado a ler, > reter, imprimir, copiar, divulgar, distribuir ou utiliza-la, parcial ou > totalmente. Caso tenha recebido esta mensagem indevidamente, queira > informar de imediato o remetente e proceder ? destrui??o de todas as c?pias > da mesma. Esta mensagem e ficheiros anexos foram tratados para estarem > livres de v?rus ou de qualquer outro defeito. MIPE - Tecnologias de > Informa??o Lda n?o ? respons?vel por perdas e danos resultantes do uso > desta mensagem. O correio electr?nico via Internet n?o permite assegurar a > confidencialidade ou a correcta recep??o das mensagens. Para mais > informa??es acerca da MIPE por favor visite o nosso website em > http://www.lusolabs.com > > *P* Antes de imprimir este e-mail pense bem se tem mesmo que o fazer. H? > cada vez menos ?rvores! / Before printing this e-mail, assess if it is > really needed > > On 20/08/14 03:54, Filipe Lacerda wrote: > > Hi Darren, > > Install the "Advanced System Management" ISO available for download. > Create a REPO based on that. You will find the "Machinery" in there. > > As for the 'Kiwi' and 'Machinery' , my own interpretation ( might be wrong > dough) is that you inspect / validate / show the systems that you run > 'Machinery' into, and gather information about them, compare systems ( > feature to be?!), etc. > > Then use 'Kiwi' to build new systems based upon the collected information. > That's how SUSE Studio builds its images, using 'Kiwi' as backend. > > Now, how to do it? Still don't know :) But going to find out :) > > Cheers, > > > -- > ------------------------------ > *Filipe Lacerda* > > *Chief Technical Officer ISO 27001 Lead Auditor* Tel: (+351)211 201 650 > Fax: (+351)211 201 634 Tmv: (+351)91 812 01 24 MIPE - Tecnologias de > Informa??o Lda Avenida da Liberdade, 36 - 6? 1250-145 Lisboa > http://www.lusolabs.com > > Esta mensagem ? confidencial e ? propriedade da MIPE - Tecnologias de > Informa??o Lda. 
Destina-se unicamente ao conhecimento do destinat?rio nela > identificado. Caso n?o seja o destinat?rio, n?o est? autorizado a ler, > reter, imprimir, copiar, divulgar, distribuir ou utiliza-la, parcial ou > totalmente. Caso tenha recebido esta mensagem indevidamente, queira > informar de imediato o remetente e proceder ? destrui??o de todas as c?pias > da mesma. Esta mensagem e ficheiros anexos foram tratados para estarem > livres de v?rus ou de qualquer outro defeito. MIPE - Tecnologias de > Informa??o Lda n?o ? respons?vel por perdas e danos resultantes do uso > desta mensagem. O correio electr?nico via Internet n?o permite assegurar a > confidencialidade ou a correcta recep??o das mensagens. Para mais > informa??es acerca da MIPE por favor visite o nosso website em > http://www.lusolabs.com > > *P* Antes de imprimir este e-mail pense bem se tem mesmo que o fazer. H? > cada vez menos ?rvores! / Before printing this e-mail, assess if it is > really needed > > On 20/08/14 03:40, Darren Thompson wrote: > > Matthias > > Thanks again, looks like I'm going to have to have a good look at the > "Machine" module. > > I have seen "kiwi" in the context of a "JEOS" install, I'm not sure I > understand it's role/involvement in a "inspect" / "validate" / "show" / > "build" migration using 'machine'. > > Regards > Darren > > > > Darren Thompson > > Professional Services Engineer / Consultant > > *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* > > Level 3, 60 City Road > > Southgate, VIC 3006 > > Mb: 0400 640 414 > > Mail: darrent at akurit.com.au > Web: www.akurit.com.au > > > On 20 August 2014 12:28, Matthias G. Eckermann wrote: > >> Hello Darren and all, >> >> On 2014-08-20 T 10:01 +1000 Darren Thompson wrote: >> >> > I'm still unclear on one point though... >> > >> > I believe an "in place" update from SLES11 => SLES12 >> > is tested and a viable option (please correct me if >> > I'm incorrect). >> >> Yes, in place upgrade from SLES 11 => SLES 12 is >> supported. >> >> To compare with the "real world": when you upgrade your >> house to have new electricity, new water pipes, new >> central heating and new windows, all "in place" (instead >> of the old), you still do not change the floor plan of >> your house. >> >> > That leaves the default ext3 file-system from SLES11 >> > so you miss out on the SLES12 roll-back advantage (as >> > you have clearly explained). >> >> Yes. >> >> To remain in the picture: to do that, you would have to >> create a new basement slab -- while your house and all >> the upper floors remain not yet upgraded -- and change >> (parts of) the floor plan. ... Well ... >> >> > I was under the impression that there was a straight >> > forward process to go from ext3 => BTRFS but I now see >> > that there are a lot of "prerequisites" that may be >> > more difficult to meet. >> >> We do support "offline in place" migration from ext3 => >> btrfs for data partitions. >> >> > Assuming that the SLES11 => SLES12 is done in one step >> > and the ext3 file-system is migrated as a separate >> > process, is there actually a guide as to what steps >> > are required to achieve that e.g. {this is a guess at >> > the process} >> [....] >> > Is this close to correct and is it worth testing as a >> > Beta tester??? >> >> Well, as said in my E-Mail before: instead of pursuing >> this kind of excercise or migration path, I suggest to >> have a look at "machinery" and "kiwi", as this is what >> will be supported, once machinery is out of the >> TechPreview status. 
>> >> As a benefit, going forward this can not only be used to >> go from 11 to 12, but also from physical to virtual or >> cloud or vice versa. Lots of options, ... >> >> Enjoy! >> >> so long - >> MgE >> >> >> > On 20 August 2014 08:48, Matthias G. Eckermann wrote: >> > >> > > Hello Darren and all, >> > > >> > > On 2014-08-20 T 07:51 +1000 Darren Thompson wrote: >> > > >> > > > Just thinking out loud, please ignore if not relevant >> > > > to you... >> > > >> > > the question indeed is important, thus let me answer in >> > > two steps: >> > > I. What we do (not) support and why (not) >> > > II. How you may achieve (soon) what you are looking for >> > > >> > > >> > > Ad I. What we do (not) support and why (not) >> > > >> > > Let me share some SUSE internal discussion (from 2012 >> > > already): Back then we discussed, if we should support >> > > the in place migration of SLES 11 to SLES 12 including a >> > > migration of ext3 to btrfs. >> > > >> > > And we decided against this for one simple reason: complexity. >> > > >> > > If you look at the changes from SLES 11 to SLES 12: >> > > - default filesystem ext3 -> btrfs >> > > - bootloader grub1 -> grub2 >> > > - init system sysvinit -> systemd >> > > - network ifcfg -> wicked >> > > ... >> > > >> > > The "logic" how to migrate things is built into YaST (and >> > > AutoYaST / AutoUpgrade) respectively, and we do cover >> > > migration of bootloader, init system, network and more >> > > with this (YaST based) migration path. >> > > >> > > We hesitated to support an inplace migration from ext3 to >> > > btrfs for the "/" filesystem, though, and we are not >> > > supporting it. Why? >> > > 1. The inplace migration of ext3 to btrfs requires some >> > > free space on the source file system. >> > > 2. The inplace migration only works for ext3 filesystems >> > > which have been created with specific options; this >> > > for example does not always work for filesystems >> > > created by SLES 10 and early versions of SLES 11. >> > > 3. The disk partitioning must be prepared to have space >> > > for the Grub2 first- and second-stages. This is not >> > > true for all partitionings created with older SLES >> > > versions (including early SLES 11 versions). >> > > 4. People who are thinking about migrating "/" to btrfs >> > > most probably want to do so, to enable snapshot/ >> > > rollback as the _real_ benefit. >> > > For _this_ to work, though, you would get rid of the >> > > the "/boot" partition, as for rollback to work >> > > "/boot" must be on the same file system as "/", >> > > and it not even can be a btrfs subvolume, ... >> > > >> > > Now, if "/" would be a migratable ext3 filesystem and if >> > > it would have enough space and even enough space to cover >> > > /boot, and if the bootloader pieces would fit on the disk, >> > > and if you would be able to run an AutoYaST profile on it, >> > > to help migrating the other stuff mentioned above, ... >> > > >> > > => Too many "ifs". Too many ifs to test this, and too many >> > > ifs to make this reliably working and supported. 
>> > > >> > > > I know that you can do an "in place" migration from >> > > > SLES11 to SLES12 I know that you can do an "in place" >> > > > migration of ext3 (the SLES11 default) to BTRFS (the >> > > > SLES12 default) >> > > > >> > > > Done correctly an in-place SLES11 => SLES12 should >> > > > produce a server that is indistinguishable from a clean >> > > > install of SLES12 (except for retaining data etc) >> > > >> > > I agree that as a "manual" process you might be able to >> > > migrate a SLES 11 SP3 system to SLES 12 this way. >> > > >> > > It's an error prone process, and you may end up losing >> > > data, e.g. if you have to re-partition to create space >> > > for the Grub2 bootloader pieces on your disk. >> > > >> > > However,let's think about what you really want. Let's >> > > start more theoretically ... >> > > >> > > >> > > II. How you may achieve (soon) what you are looking for >> > > >> > > You want a SLES 12 system, with snapshot/rollback for the >> > > "full system" without the need to reconfigure and tweak >> > > everything you did configure and tweak for SLES 11. >> > > >> > > Is this the correct understanding? >> > > >> > > > How do you set up the other "default" BTRFS >> > > > sub-volumes and how do you move/migrate the existing >> > > > subdirectories into the default BTRFS sub-volumes. >> > > > >> > > > Is there a migration tool/script that would help to >> > > > "complete" the ext3 => BTRFS migration process? >> > > >> > > If yes, there are obviously two ways to achieve your goal: >> > > >> > > 1. In place migration; as discussed above, and not >> > > supportable / not supported. >> > > >> > > 2. A way to preserve your complex configurations (which >> > > you did in SLES 11 ) and re-apply them on SLES 12. >> > > Some tool, which >> > > - "inspects" your SLES 11 system >> > > - "validates" and normalizes your SLES 11 configuration >> > > - allows you to "show" and change the configuration >> > > - helps you to "build" a SLES 12 system based on your >> > > (former) configuration (using KIWI). >> > > >> > > Do you remember the steps of "inspect" / "validate" / >> > > "show" / "build"? >> > > >> > > It's what the tool "machinery" in the "Advanced Systems >> > > Management Module" is meant for. And while the tool is not >> > > fully there yet, where we want it to be, it is the most >> > > promising way of migrating complex configurations between >> > > operating system versions or even operating systems going >> > > forward. >> > > >> > > Hope this helps to explain SUSE's plans in this area. >> > > >> > > So long - >> > > MgE >> > > >> > > -- >> > > Matthias G. Eckermann Senior Product Manager SUSE? Linux >> Enterprise >> > > Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: >> mge at suse.com >> > > SUSE LINUX Products GmbH Maxfeldstra?e 5 90409 N?rnberg >> Germany >> > > GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 16746 (AG >> N?rnberg) >> > > >> >> >> >> -- >> Matthias G. Eckermann Senior Product Manager SUSE? 
Linux Enterprise >> Phone: +49 30 44315731 Mobile: +49 179 2949448 E-Mail: mge at suse.com >> SUSE LINUX Products GmbH Maxfeldstraße 5 90409 Nürnberg Germany >> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) >> _______________________________________________ sles-beta mailing list sles-beta at lists.suse.com http://lists.suse.com/mailman/listinfo/sles-beta From aj at suse.com Wed Aug 20 01:23:47 2014 From: aj at suse.com (Andreas Jaeger) Date: Wed, 20 Aug 2014 09:23:47 +0200 Subject: [sles-beta] Thinking about SLES11 migration testing, BTRFS and sub-volumes? In-Reply-To: References: <20140819224800.GC31195@suse.com> <20140820022852.GA2524@suse.com> <53F40DDB.8060601@lusolabs.com> <53F40FB2.4010609@lusolabs.com> Message-ID: <53F44D03.8060106@suse.com> On 08/20/2014 05:11 AM, Darren Thompson wrote: > Filipe > > Excellent, thanks for the links... > > Looks like I've got some reading/researching to do ;-) If you have questions about machinery, please open a new thread or join us on the machinery mailing list. We do have machinery developers on this list, and we presented machinery here a couple of weeks ago. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From darrent at akurit.com.au Wed Aug 20 01:39:05 2014 From: darrent at akurit.com.au (Darren Thompson) Date: Wed, 20 Aug 2014 17:39:05 +1000 Subject: [sles-beta] Creating XEN VM files to use with HA cluster In-Reply-To: References: Message-ID: Team Just completing this thread... If you need to get back to an xm-compatible file... Step 1: virsh dumpxml GuestID > guest.xml Step 2: virsh domxml-to-native xen-xm guest.xml > guest This "guest" file can then be used with Xen HA or to create a Xen domain with 'xm create guest'. In theory this should also work with 'xl create guest', though I have not tested that part.
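For the HA side, the libvirt XML from step 1 can also be handed straight to the VirtualDomain resource agent instead of converting back to an xm-style file. A minimal sketch, assuming the guest definition has been copied to /etc/libvirt/libxl/guest.xml on every cluster node; resource name, path and operation values here are illustrative, and should be checked against the agent's metadata:

    crm configure primitive vm_guest ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/libxl/guest.xml" \
        op start timeout="120s" op stop timeout="120s" \
        op monitor interval="30s" timeout="60s"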
Regards Darren Darren Thompson Professional Services Engineer / Consultant *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* Level 3, 60 City Road Southgate, VIC 3006 Mb: 0400 640 414 Mail: darrent at akurit.com.au Web: www.akurit.com.au On 2 June 2014 10:54, Darren Thompson wrote: > Team > > Found it, old dog needs to learn some new tricks ;-) > > "# virsh dumpxml *GuestID* > *guest.xml"* > > From: > https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/form-Virtualization-Managing_guests_with_virsh-Creating_a_virtual_machine_XML_dump_configuration_file.html > (Is there updated SLES12 documentation on this yet) > > Also, the original Xen OCF cluster resource cannot be used as it's hard > coded to "xm" which is depreciated. > > There is a new 'VirtualDomain' OCF file which is used for clustering > 'lib-virt' compatible VM's (which seems to support Xen, KVM etc). > > Darren > > > On 31 May 2014 15:58, Darren Thompson wrote: > >> Team >> >> I must be missing something obvious, but how do you get the VM config >> files for HA >> >> In SLES11+ the VM create tool created the configuration file or you could >> run "xm list {vmname} -l <= That;s from memory and my not be correct syntax. >> >> I have created a VM using the new virt-manager but it does not seem to >> have created a config file. >> >> The xl list command does not seem to "find" the VM when it's running and >> does not list it at all when stopped (I can only start it from virt-manager >> itself). >> >> Any hints/tips appreciated as I'm sure that I'm missing something that >> should be obvious. >> >> Darren >> >> >> -- >> >> Darren Thompson >> >> Professional Services Engineer / Consultant >> >> *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* >> >> Level 3, 60 City Road >> >> Southgate, VIC 3006 >> >> Mb: 0400 640 414 >> >> Mail: darrent at akurit.com.au >> Web: www.akurit.com.au >> > > > > -- > > Darren Thompson > > Professional Services Engineer / Consultant > > *[image: cid:image001.jpg at 01CB7C0C.6C6A2AE0]* > > Level 3, 60 City Road > > Southgate, VIC 3006 > > Mb: 0400 640 414 > > Mail: darrent at akurit.com.au > Web: www.akurit.com.au > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3692 bytes Desc: not available URL: From urs.frey at post.ch Wed Aug 20 04:07:22 2014 From: urs.frey at post.ch (urs.frey at post.ch) Date: Wed, 20 Aug 2014 10:07:22 +0000 Subject: [sles-beta] SLES12 RC1 KVM client Productname In-Reply-To: <20140819174901.GA20663@linux-w520.guibland.com> References: <40637DBB36AF3941B243A286A432CA0B0F9C8574@HXMB12.pnet.ch> <20140819143724.GA19292@linux-w520.guibland.com> <40637DBB36AF3941B243A286A432CA0B0F9D0C48@HXMB12.pnet.ch> <20140819174901.GA20663@linux-w520.guibland.com> Message-ID: <40637DBB36AF3941B243A286A432CA0B0F9D0EA5@HXMB12.pnet.ch> Hi Antoine Thank you very much for your information and steps how to change the HW type in the clients xml profile >> So it is not Qemu, which has changed its behavior, but dmidecode which now does obviously show the real qemu standard value. >> And because dmidecode has changed, facter does also show the "real" qemu default value. >i don't know how your machine type ihas changed, but >but something has changed the value in the libvirt xml configuration, >so the VM guest machine type reported by dmidecode was altered. 
No, I disagree: NOTHING and NOBODY has changed the libvirt xml configuration so far on Dom0. See here on SLES11-SP3 ================ h05bug:/etc/libvirt/qemu # cat h039uc.xml h039uc 55647a92-2d6d-a557-99f1-20acfe3db7aa 2097152 2097152 2 hvm =================== And on the KVM client under SLES11-SP3 the client reports with dmidecode h039uc:~ # dmidecode | grep Product Product Name: Bochs h039uc:~ # uname -a Linux h039uc 3.0.101-0.31-default #1 SMP Wed Jun 4 08:59:53 UTC 2014 (87c5279) x86_64 x86_64 x86_64 GNU/Linux h039uc:~ # h039uc:~ # lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 2 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 6 Model: 2 Stepping: 3 CPU MHz: 2099.998 BogoMIPS: 4199.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 64K L1i cache: 64K L2 cache: 512K NUMA node0 CPU(s): 0,1 h039uc:~ # ======================== Now here on SLES12 RC1 h05cni:/etc/libvirt/qemu # uname -a Linux h05cni 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux h05cni:/etc/libvirt/qemu # cat h05cnh.xml h05cnh 9b291448-befe-2d7e-2eda-20bde0715fae 4194304 4194304 4 hvm =================== And on the KVM client under SLES12-RC1 the client reports with dmidecode h05cnh:~ # dmidecode | grep Product Product Name: Standard PC (i440FX + PIIX, 1996) h05cnh:~ # lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 4 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 6 Model name: QEMU Virtual CPU version 2.0.0 Stepping: 3 CPU MHz: 2199.998 BogoMIPS: 4399.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 4096K NUMA node0 CPU(s): 0-3 h05cnh:~ # ================================== So you can see that on the client the output of dmidecode has changed. There is no change on Dom0 in the libvirt xml so far. All set up using vm-install, the graphical utility. Best regards Urs Frey????????????????????????????????????????????? Post CH AG Informationstechnologie IT Betrieb Webergutstrasse 12 3030 Bern (Zollikofen) Telefon : ++41 (0)58 338 58 70 FAX???? : ++41 (0)58 667 30 07 E-Mail:?? urs.frey at post.ch -----Urspr?ngliche Nachricht----- Von: Antoine Ginies [mailto:aginies at suse.com] Gesendet: Tuesday, August 19, 2014 7:49 PM An: Frey Urs, IT222 Cc: sles-beta at lists.suse.com Betreff: Re: AW: [sles-beta] SLES12 RC1 KVM client Productname urs.frey at post.ch: > >Antoine Ginies > Hi Antoine > > Thank you very much for your answer, I have read with very high interest. > > >By default machine type for Qemu is pc-i440fx: > >under SLES12 host: > >linux-x61s:~ # qemu-system-x86_64 -M ? | grep default > >pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > > > >and under SLE11SP3 host: > >pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > > >Machine type can be changed. So it's not a bug if the machine type was > >specified to launch the VM guest. > > ON my KVM Dom0 (Hypervisor) I can see what the supported values with qemu-system-x86_64 are. > What I miss is some KVM related value in the list below. > > SLES12-RC1 > ========== > h05cni:~ # uname -a > Linux h05cni 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux > h05cni:~ # > h05cni:~ # qemu-system-x86_64 -M ? 
> h05cni:~ # qemu-system-x86_64 -machine help > Supported machines are: > pc-0.13 Standard PC (i440FX + PIIX, 1996) > pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2.0) > pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > pc-1.0 Standard PC (i440FX + PIIX, 1996) > pc-q35-1.7 Standard PC (Q35 + ICH9, 2009) > pc-1.1 Standard PC (i440FX + PIIX, 1996) > q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2.0) > pc-q35-2.0 Standard PC (Q35 + ICH9, 2009) > pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) > pc-i440fx-1.5 Standard PC (i440FX + PIIX, 1996) > pc-0.14 Standard PC (i440FX + PIIX, 1996) > pc-0.15 Standard PC (i440FX + PIIX, 1996) > xenfv Xen Fully-virtualized PC > pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) > isapc ISA-only PC > pc-0.10 Standard PC (i440FX + PIIX, 1996) > pc-1.2 Standard PC (i440FX + PIIX, 1996) > pc-0.11 Standard PC (i440FX + PIIX, 1996) > pc-i440fx-1.7 Standard PC (i440FX + PIIX, 1996) > pc-i440fx-1.6 Standard PC (i440FX + PIIX, 1996) > none empty machine > xenpv Xen Para-virtualized PC > pc-q35-1.5 Standard PC (Q35 + ICH9, 2009) > pc-q35-1.6 Standard PC (Q35 + ICH9, 2009) > pc-0.12 Standard PC (i440FX + PIIX, 1996) > pc-1.3 Standard PC (i440FX + PIIX, 1996) > h05cni:~ # > > SLES11-SP3 > =========== > h062rm:~ # uname -a > Linux h062rm 3.0.101-0.31-default #1 SMP Wed Jun 4 08:59:53 UTC 2014 (87c5279) x86_64 x86_64 x86_64 GNU/Linux > h062rm:~ # qemu-kvm -machine help > Supported machines are: > q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-1.4) > pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) > pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-1.4) > pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > pc-1.3 Standard PC > pc-1.2 Standard PC > pc-1.1 Standard PC > pc-1.0 Standard PC > pc-0.15 Standard PC > pc-0.14 Standard PC > pc-0.13 Standard PC > pc-0.12 Standard PC > pc-0.11 Standard PC, qemu 0.11 > pc-0.10 Standard PC, qemu 0.10 > isapc ISA-only PC > none empty machine > h062rm:~ # > > -machine [type=]name[,prop[=value][,...]] > selects emulated machine ('-machine help' for list) > property accel=accel1[:accel2[:...]] selects accelerator > supported accelerators are kvm, xen, tcg (default: tcg) > kernel_irqchip=on|off controls accelerated irqchip support > kvm_shadow_mem=size of KVM shadow MMU > dump-guest-core=on|off include guest memory in a core dump (default=on) > mem-merge=on|off controls memory merge support (default: on) > > Maybe you misunderstood: > I did set up this KVM client with the graphical virt-install and also using vm-install. > But there setting of HW type I could not find yet > So obviously the graphical tools do detect by their own setting the default value. Yes the graphical tool should use the default value, and it should be "pc-i440fx" by default. > What I noted is that obviously dmidecode does show a different result from SLES11-SP3 & SLES12 Beta9 towards SLES12 RC1 I didn't done any test under BETA9. Lets focus on SLE11SP3 and SLE12RC1 by default (with any changes). Default Machine type for VM guest must be "pc-i440fx" if you have done the installation using "virt-install" or "vm-install" tool. 
Once the installation is done, the libvirt configuration will be in: /etc/libvirt/qemu if you grep "machine" in the xml file: linux-x61s:/etc/libvirt/qemu # grep machine sles11.xml hvm of course you can change this value using virsh (VM guest should be off): ******************** 1) virsh -c qemu:///system 2) virsh # list --inactive Id Name State ---------------------------------------------------- - sles11 shut off 3) virsh # edit sles11 ******************** 4) change the value between tag, in our example we change the value to "pc-q35-2.0": hvm ******************** Domain sles11 XML configuration not changed. 5) virsh # start sles11 ******************** log on the guest and check machine/Product: ******************** 6) linux-ed3a:~ # dmidecode | grep Prod Product Name: Standard PC (Q35 + ICH9, 2009) ******************** > So it is not Qemu, which has changed its behavior, but dmidecode which now does obviously show the real qemu standard value. > And because dmidecode has changed, facter does also show the "real" qemu default value. i don't know how your machine type ihas changed, but but something has changed the value in the libvirt xml configuration, so the VM guest machine type reported by dmidecode was altered. regards. > -----Urspr?ngliche Nachricht----- > Von: Antoine Ginies [mailto:aginies at suse.com] > Gesendet: Tuesday, August 19, 2014 4:37 PM > An: Frey Urs, IT222 > Cc: sles-beta at lists.suse.com > Betreff: Re: [sles-beta] SLES12 RC1 KVM client Productname > > urs.frey at post.ch: > > Hi > > Hello, > > > > > When set up a SLES12 RC1 as KVM client, the productname looks quite strange now > > > > h05cnh:~ # facter | grep product > > productname => Standard PC (i440FX + PIIX, 1996) > > h05cnh:~ # uname -a > > Linux h05cnh 3.12.25-2-default #1 SMP Mon Jul 28 12:18:48 UTC 2014 (1b84426) x86_64 x86_64 x86_64 GNU/Linux > > h05cnh:~ # facter | grep product > > productname => Standard PC (i440FX + PIIX, 1996) > > h05cnh:~ # dmidecode | grep Product > > Product Name: Standard PC (i440FX + PIIX, 1996) > > h05cnh:~ # facter | grep virtual > > is_virtual => true > > virtual => kvm > > h05cnh:~ # > > By default machine type for Qemu is pc-i440fx: > under SLES12 host: > linux-x61s:~ # qemu-system-x86_64 -M ? | grep default > pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996) (default) > > and under SLE11SP3 host: > pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) (default) > > > > Until Beta9 the output was different and more readable, as it was under SLES11-SP3 > > h039ua:~ # uname -a > > Linux h039ua 3.12.22-2-default #1 SMP Fri Jun 13 13:46:18 UTC 2014 (ee1c2a2) x86_64 x86_64 x86_64 GNU/Linux > > h039ua:~ # facter | grep product > > productname => Bochs > > h039ua:~ # dmidecode | grep Product > > Product Name: Bochs > > h039ua:~ # facter | grep virtual > > is_virtual => true > > virtual => kvm > > h039ua:~ # > > > > Could this new unusual product name coming with SLES12-RC1 be considered as I bug? > > Machine type can be changed. So it's not a bug if the machine type was > specified to launch the VM guest. > > > regards. 
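For reference, the machine type being edited in step 4 lives on the <type> element inside the <os> section of the guest's libvirt XML. An illustrative snippet (the values are examples, not taken from the guests discussed above):

    <os>
      <type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>
      <boot dev='hd'/>
    </os>

If the machine attribute is absent, or set to the bare "pc" alias, qemu falls back to its current default (pc-i440fx-1.4 on a SLES 11 SP3 host, pc-i440fx-2.0 on SLES 12, per the -M listings quoted in this thread). With the qemu 2.0 shipped on SLES 12, that machine type's description string also appears as the SMBIOS product name, whereas older qemu reported the generic "Bochs" string, which would match the difference in the dmidecode output shown above.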
> > > I was searching also in SUSEConnect to see how KVM gets recognized and will be treated to be registered as virtual client > > > > I'll appreciate to get more information about this new product naming of a KVM client > > > > And also of course to get a hint where about to search in the code of suse connect to get an idea about hovKVM gets treated please > > Thank you very much > > > > Best regards > > > > Urs Frey > > Post CH AG > > Informationstechnologie > > IT Betrieb > > Webergutstrasse 12 > > 3030 Bern (Zollikofen) > > Telefon : ++41 (0)58 338 58 70 > > FAX : ++41 (0)58 667 30 07 > > E-Mail: urs.frey at post.ch > > > > > > > > > _______________________________________________ > > sles-beta mailing list > > sles-beta at lists.suse.com > > http://lists.suse.com/mailman/listinfo/sles-beta > > > -- > Antoine Ginies > Project Manager > SUSE France -- Antoine Ginies Project Manager SUSE France From lmb at suse.com Wed Aug 20 04:15:16 2014 From: lmb at suse.com (Lars Marowsky-Bree) Date: Wed, 20 Aug 2014 12:15:16 +0200 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: References: Message-ID: <20140820101516.GA4328@suse.de> On 2014-08-19T11:08:36, Andy Ryan wrote: > I am running pacemaker on these systems and that seems to be an issue. I > stopped pacemaker, turned on ip_forward, and restarted pacemaker and the > system did not reboot. It looks like it is the stonith device? The > stonith device is a shared disk partition, so it should not be an issue > (since the disc is on its own FC HBA). But I checked and one of the other > nodes does indeed reset the node that I tried to enable ip_forwarding on. Hard to comment without knowing your network and cluster topology, but it's conceivable that you're changing something that breaks the cluster communication. I doubt it's related to the STONITH device, but I'd expect you to get a bunch of corosync messages from all the nodes. Regards, Lars -- Architect Storage/HA SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imend?rffer, HRB 21284 (AG N?rnberg) "Experience is the name everyone gives to their mistakes." -- Oscar Wilde From behlert at suse.com Wed Aug 20 05:11:26 2014 From: behlert at suse.com (Stefan Behlert) Date: Wed, 20 Aug 2014 13:11:26 +0200 Subject: [sles-beta] RC2 Dates ? In-Reply-To: <46AC8C81C10B8C48820201DF2AE1D76D67ADE30A@hqmbx6.eur.ad.sag> References: <46AC8C81C10B8C48820201DF2AE1D76D67ADE30A@hqmbx6.eur.ad.sag> Message-ID: <20140820111126.GN14187@suse.de> Hi all, On Aug 19, 14 10:50:24 +0000, Waite, Dick wrote: > Grand Day SLES12 List, > > On one of the older schedules we were going to have a SLES 12 refresh on > Friday 22nd. Is that still a maybe date or should we plan for a relaxing > weekend with the Bar-B-Q? Schools will be starting up over the next > couple of weeks. Then people who are not tied to school holidays can exit > stage left, and that?s my exit. Well, you can certainly have a relaxed Bar-B-Q at the week end if the weather plays along... ;) In case you need an escape from that, and instead of watching real bugs crawling over the lawn want to discover some digital ones, yes, that will be possible. At least as it seems at the moment. After having found some issues earlier this week that we wanted to be fixed, we are now giving images to QA for validation tests. Once those are done and have been passsed, we plan to release the images as RC2. 
I currently expect them to be visible at the usual mirrors and locations on Friday, but this is just like the weather forecast - if the QA thunderstorm hits us, all bets are off ;) ciao, Stefan > > __R > -- Stefan Behlert, SUSE LINUX Project Manager Enterprise Server Maxfeldstr. 5, D-90409 Nuernberg, Germany Phone +49-911-74053-173 SUSE LINUX Products GmbH, Nuernberg; GF: Jeff Hawn, Jennifer Guild, Felix Imendoerffer, HRB 16746 (AG Nuernberg) From filipe.lacerda at lusolabs.com Wed Aug 20 05:28:36 2014 From: filipe.lacerda at lusolabs.com (Filipe Lacerda) Date: Wed, 20 Aug 2014 12:28:36 +0100 Subject: [sles-beta] Enabling ip forwarding causes server to reboot In-Reply-To: <20140820101516.GA4328@suse.de> References: <20140820101516.GA4328@suse.de> Message-ID: <34e834d3-a527-4f46-9381-e7ea88d53f4d@email.android.com> Hi. Are you using unicast or multicast? I had a problem using multicast recently in SLES 11 and happend to find out that the network switch had to be configured to accept multicast as some more config over the infrastructure. I was using Cisco UCS blades and networking equipment. Changed to unicast and no more spontaneous rebooting either after fencing or during runtime. One question I have. Do you set a Fencing Topology? or just use one fencing device? Thanks, Filipe Lacerda On 20 August 2014 11:15:16 WEST, Lars Marowsky-Bree wrote: >On 2014-08-19T11:08:36, Andy Ryan wrote: > >> I am running pacemaker on these systems and that seems to be an >issue. I >> stopped pacemaker, turned on ip_forward, and restarted pacemaker and >the >> system did not reboot. It looks like it is the stonith device? The >> stonith device is a shared disk partition, so it should not be an >issue >> (since the disc is on its own FC HBA). But I checked and one of the >other >> nodes does indeed reset the node that I tried to enable ip_forwarding >on. > >Hard to comment without knowing your network and cluster topology, but >it's conceivable that you're changing something that breaks the cluster >communication. > >I doubt it's related to the STONITH device, but I'd expect you to get a >bunch of corosync messages from all the nodes. > > >Regards, > Lars > >-- >Architect Storage/HA >SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix >Imend?rffer, HRB 21284 (AG N?rnberg) >"Experience is the name everyone gives to their mistakes." -- Oscar >Wilde > >_______________________________________________ >sles-beta mailing list >sles-beta at lists.suse.com >http://lists.suse.com/mailman/listinfo/sles-beta -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dick.Waite at softwareag.com Wed Aug 20 07:03:56 2014 From: Dick.Waite at softwareag.com (Waite, Dick) Date: Wed, 20 Aug 2014 13:03:56 +0000 Subject: [sles-beta] RC2 Dates ? In-Reply-To: <20140820111126.GN14187@suse.de> References: <46AC8C81C10B8C48820201DF2AE1D76D67ADE30A@hqmbx6.eur.ad.sag>, <20140820111126.GN14187@suse.de> Message-ID: <46AC8C81C10B8C48820201DF2AE1D76D67ADECA0@hqmbx6.eur.ad.sag> Many Thanks for the good words Stefan. Yes living in Europe one always has to have the Plan "B" ready, hope Mr. Salmond has his? So get the bits and bobs for the Bar-B-Q ready but keep a weather eye open on Friday afternoon for an .iso or two coming over the wires. 
__R ________________________________________ From: sles-beta-bounces at lists.suse.com [sles-beta-bounces at lists.suse.com] on behalf of Stefan Behlert [behlert at suse.com] Sent: 20 August 2014 13:11 To: sles-beta at lists.suse.com Subject: Re: [sles-beta] RC2 Dates ? Hi all, On Aug 19, 14 10:50:24 +0000, Waite, Dick wrote: > Grand Day SLES12 List, > > On one of the older schedules we were going to have a SLES 12 refresh on > Friday 22nd. Is that still a maybe date or should we plan for a relaxing > weekend with the Bar-B-Q? Schools will be starting up over the next > couple of weeks. Then people who are not tied to school holidays can exit > stage left, and that?s my exit. Well, you can certainly have a relaxed Bar-B-Q at the week end if the weather plays along... ;) In case you need an escape from that, and instead of watching real bugs crawling over the lawn want to discover some digital ones, yes, that will be possible. At least as it seems at the moment. After having found some issues earlier this week that we wanted to be fixed, we are now giving images to QA for validation tests. Once those are done and have been passsed, we plan to release the images as RC2. I currently expect them to be visible at the usual mirrors and locations on Friday, but this is just like the weather forecast - if the QA thunderstorm hits us, all bets are off ;) ciao, Stefan > > __R > -- Stefan Behlert, SUSE LINUX Project Manager Enterprise Server Maxfeldstr. 5, D-90409 Nuernberg, Germany Phone +49-911-74053-173 SUSE LINUX Products GmbH, Nuernberg; GF: Jeff Hawn, Jennifer Guild, Felix Imendoerffer, HRB 16746 (AG Nuernberg) _______________________________________________ sles-beta mailing list sles-beta at lists.suse.com http://lists.suse.com/mailman/listinfo/sles-beta Software AG ? Sitz/Registered office: Uhlandstra?e 12, 64297 Darmstadt, Germany ? Registergericht/Commercial register: Darmstadt HRB 1562 - Vorstand/Management Board: Karl-Heinz Streibich (Vorsitzender/Chairman), Dr. Wolfram Jost, Arnd Zinnhardt; - Aufsichtsratsvorsitzender/Chairman of the Supervisory Board: Dr. Andreas Bereczky - http://www.softwareag.com From behlert at suse.com Fri Aug 22 11:56:47 2014 From: behlert at suse.com (Stefan Behlert) Date: Fri, 22 Aug 2014 19:56:47 +0200 Subject: [sles-beta] [ANNOUNCE] SLES 12 RC2 is available Message-ID: <20140822175647.GB17916@suse.de> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !! SUSE CONFIDENTIAL !! SUSE CONFIDENTIAL !! SUSE CONFIDENTIAL !! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Dear Beta participants, we are happy to announce the second Release Candidate of SUSE Linux Enterprise Server 12. ISO images are now available for download now. Please go to http://www.novell.com/beta and select "View my beta page". Here you should see all Beta's you are part of. Note that the directory contains the images for the SDK as well as for the SLED Extension. We offer 3 DVD ISOs: DVD1 contains the binaries, the second DVD the sources and the third DVD the debuginfo packages. The final product will not contain the debuginfo packages on the media. For installation purposes you just need Media 1 for your architecture. Please verify the md5sum of the ISO using the MD5SUMS file, which can be found in the same directory on the download servers. Known issues (selection): 892675 - wrong dependency in libsoftokn3-hmac-32bit You need to ignore dependencies when installing, and install libsoftokn3-32bit in addition. 
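Read as commands, that workaround amounts to something like the following; the package file name is a placeholder for whatever RC2 ships, and during an interactive installation the same effect is achieved by choosing to break the dependency when YaST reports the conflict:

    # install the package despite the wrong dependency:
    rpm -Uvh --nodeps libsoftokn3-hmac-32bit-<version>.x86_64.rpm
    # then add the companion package named in the known issue:
    zypper install libsoftokn3-32bit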
An update will be released in the update channel. 892358 - grub2-once does not work properly with btrfs Due to an interaction issue between btrfs and grub, hibernating machines with BtrFS results in the need to re-install grub2. Run "/usr/bin/grub2-editenv /boot/grub2/grubenv unset next_entry" after resuming. IMPORTANT CHANGES: * LVM2 proposal is no nlonger suggesting a separate /boot partition If you have msdos partition tables (and the disk was previously used, not new) this can be problematic. We would appreciate testing done and (positive and negative) feedback from that testing. * ksh was moved to the legacy module, and replaced with mksh. Note that if you register the system, you will receive updates. With this snapshot we have reached several milestones: o Milestone: Only blocker bugfixes allowed. o Milestone: All translations available. o Official ISV/IHV (re-)certification/validation starts. o Official partner acceptance continues. o Only showstopper and security bugfixes get integrated. o Run final stress, certification, performance, update and regression tests. o Fix blocker bugs. For the next Release we are targeting these actions and milestones: o Milestone: All blocker bugs resolved. o Milestone: Last documentation and translation fixes integrated. o Milestone: Last security and showstopper bugfixes integrated. o Milestone: All blocker bugs resolved. o Official ISV/IHV (re-)certification/validation continues. o Official partner acceptance continues. o Only showstopper and security bugfixes get integrated. o Apply final release notes entries o Further security updates will be handled as separate packages. Be aware that we have provided snapshots of three 'modules' at the download page with this milestone: Module "Legacy": The Legacy Module supports your migration from SUSE Linux Enterprise 10 and 11 and other systems to SUSE Linux Enterprise 12, by providing packages which are discontinued on SUSE Linux Enterprise Server, but which you may rely on, such as: CyrusIMAP, BSD like ftp client, sendmail, IBM Java6. Access to the Legacy Module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself; please check the Release Notes for further details. Module "Web & Scripting": The SUSE Linux Enterprise Web and Scripting Module delivers a comprehensive suite of scripting languages, frameworks, and related tools helping developers and systems administrators accelerate the creation of stable, modern web applications. Access to the Web and Scripting Module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself; please check the Release Notes for further details. Module "Advanced System Management": This Module gives you a sneak-peak into our upcoming systems management toolbox which allows you to inspect systems remotely, store their system description and create new systems to deploy them in datacenters and clouds. The toolbox is still in active development and will get regular updates. Note that these are provided "as is" and not all packages that will go into them are already contained in them. We recommend to use them through the repositories available online after registration. Thanks in advance for all your testing Your SUSE Linux Enterprise Team -- Stefan Behlert, SUSE LINUX Project Manager Enterprise Server Maxfeldstr. 
5, D-90409 Nuernberg, Germany Phone +49-911-74053-173 SUSE LINUX Products GmbH, Nuernberg; GF: Jeff Hawn, Jennifer Guild, Felix Imendoerffer, HRB 16746 (AG Nuernberg)