[sles-beta] SLES 12 x86_64 RC1 snapper does not do snapshots as configured

urs.frey at post.ch
Fri Aug 8 01:37:04 MDT 2014


Hi

In my production environment I have a CMDB and Puppet to configure the servers and manage user accounts. Puppet jobs run hourly.
So in my specific environment the filesystems within the btrfs root are not really static; in fact, they change whenever Puppet implements a new configuration.

Use case: the time servers get migrated, so /etc/ntp.conf must be changed on all servers.
To keep control, there is not only the Puppet logfile but also the btrfs snapshot, showing exactly in which hour which file changed, and with what content.
This btrfs double-check is very welcome here.

v03er9:~ # snapper list
Type   | #  | Pre # | Date                     | User | Cleanup  | Description    | Userdata
-------+----+-------+--------------------------+------+----------+----------------+-------------
single | 0  |       |                          | root |          | current        |
pre    | 1  |       | Wed Aug  6 16:39:50 2014 | root | number   | zypp(zypper)   | important=no
post   | 2  | 1     | Wed Aug  6 16:39:51 2014 | root | number   |                | important=no
pre    | 3  |       | Wed Aug  6 16:39:52 2014 | root | number   | zypp(zypper)   | important=no
post   | 4  | 3     | Wed Aug  6 16:39:53 2014 | root | number   |                | important=no
pre    | 5  |       | Wed Aug  6 16:39:54 2014 | root | number   | zypp(zypper)   | important=no
post   | 6  | 5     | Wed Aug  6 16:39:56 2014 | root | number   |                | important=no
pre    | 7  |       | Wed Aug  6 16:40:03 2014 | root | number   | zypp(zypper)   | important=no
post   | 8  | 7     | Wed Aug  6 16:40:05 2014 | root | number   |                | important=no
pre    | 9  |       | Thu Aug  7 17:11:22 2014 | root | number   | yast sw_single |
post   | 10 | 9     | Thu Aug  7 17:13:57 2014 | root | number   |                |
single | 11 |       | Thu Aug  7 18:15:01 2014 | root | timeline | timeline       |
single | 12 |       | Thu Aug  7 19:15:02 2014 | root | timeline | timeline       |
single | 13 |       | Thu Aug  7 20:15:01 2014 | root | timeline | timeline       |
single | 14 |       | Thu Aug  7 21:15:01 2014 | root | timeline | timeline       |
single | 15 |       | Thu Aug  7 22:15:01 2014 | root | timeline | timeline       |
single | 16 |       | Thu Aug  7 23:15:01 2014 | root | timeline | timeline       |
single | 17 |       | Fri Aug  8 00:15:01 2014 | root | timeline | timeline       |
single | 18 |       | Fri Aug  8 01:15:01 2014 | root | timeline | timeline       |
single | 19 |       | Fri Aug  8 02:15:01 2014 | root | timeline | timeline       |
single | 20 |       | Fri Aug  8 03:15:01 2014 | root | timeline | timeline       |
single | 21 |       | Fri Aug  8 04:15:01 2014 | root | timeline | timeline       |
single | 22 |       | Fri Aug  8 05:15:02 2014 | root | timeline | timeline       |
single | 23 |       | Fri Aug  8 06:15:01 2014 | root | timeline | timeline       |
single | 24 |       | Fri Aug  8 07:15:01 2014 | root | timeline | timeline       |
single | 25 |       | Fri Aug  8 08:15:01 2014 | root | timeline | timeline       |
single | 26 |       | Fri Aug  8 09:15:01 2014 | root | timeline | timeline       |
v03er9:~ # snapper status 25..26
c..... /var/lib/puppet/client_data/catalog/v03er9.pnet.ch.json
c..... /var/lib/puppet/state/last_run_summary.yaml
c..... /var/lib/puppet/state/puppet_user_report_date.patchnix
c..... /var/lib/puppet/state/puppet_user_summary.log
c..... /var/lib/puppet/state/puppet_user_summary.yaml_uploaded
c..... /var/lib/puppet/state/state.yaml
c..... /var/post/patchnix_report.v03er9
v03er9:~ #
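A sketch of how this double-check could be automated, assuming the "snapper status" output format shown above (paths containing spaces are not handled in this sketch; the forwarding target, e.g. Splunk, is up to the environment):

```shell
#!/bin/sh
# Extract the changed paths from "snapper status" output.
# Each status line starts with a flag field such as "c....." (content
# changed), "+....." (created) or "-....." (deleted), followed by the path.
extract_changes() {
    awk 'NF >= 2 && $1 ~ /^[+ct.pug-]+$/ { print $2 }'
}

# Example with canned input; in production one would pipe
# "snapper status 25..26" into extract_changes and forward the
# result line by line to a log collector.
printf '%s\n' \
  'c..... /var/lib/puppet/state/state.yaml' \
  'c..... /var/post/patchnix_report.v03er9' \
  | extract_changes
# prints:
# /var/lib/puppet/state/state.yaml
# /var/post/patchnix_report.v03er9
```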

But again: this is very specific to an environment where automated configuration of many servers is implemented, e.g. by SUSE Manager or similar.


So I agree: everybody has to choose what fits him/her best.

So the new default TIMELINE_CREATE="no" is OK for me.
I would appreciate finding such changes in the changelog for the release.
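For reference, re-enabling the timeline only takes a small edit in /etc/snapper/configs/root; the retention values below are illustrative, not necessarily the shipped defaults:

```shell
# /etc/snapper/configs/root (excerpt)

# create hourly timeline snapshots again
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"

# retention limits for the timeline cleanup algorithm
# (illustrative values -- verify against the installed defaults)
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
```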
Thank you

Best regards

Urs Frey                                              
Post CH AG
Informationstechnologie
IT Betrieb 
Webergutstrasse 12 
3030 Bern (Zollikofen) 
Telefon : ++41 (0)58 338 58 70 
FAX     : ++41 (0)58 667 30 07 
E-Mail:   urs.frey at post.ch

-----Original Message-----
From: Alejandro Bonilla [mailto:abonilla at suse.com]
Sent: Thursday, August 07, 2014 7:47 PM
To: Frey Urs, IT222; kukuk at suse.de
Cc: sles-beta at lists.suse.com
Subject: Re: [sles-beta] SLES 12 x86_64 RC1 snapper does not do snapshots as configured

Hi Urs,

I don't think managing hourly snapshots can scale very well. I would vote for hourly snapshots being disabled by default.

Again, you can enable it.

  Original Message  
From: <urs.frey at post.ch>
Sent: Thursday, August 7, 2014 11:58 AM
To: kukuk at suse.de
Cc: sles-beta at lists.suse.com
Subject: Re: [sles-beta] SLES 12 x86_64 RC1 snapper does not do snapshots as configured

Hi
Thanks for the clarification. I see you optimized the defaults; on my side I had taken the previous state as the standard.

>Not for the root partition.

Under a default btrfs installation, not only root is within the / snapshot. There are many subvolumes undergoing changes quite often:
v03er9:~ # df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda3       20973568 1659992  17410152   9% /
devtmpfs          949948       0    949948   0% /dev
tmpfs             957800       0    957800   0% /dev/shm
tmpfs             957800    9892    947908   2% /run
tmpfs             957800       0    957800   0% /sys/fs/cgroup
/dev/sda3       20973568 1659992  17410152   9% /var/tmp
/dev/sda3       20973568 1659992  17410152   9% /var/spool
/dev/sda3       20973568 1659992  17410152   9% /var/opt
/dev/sda3       20973568 1659992  17410152   9% /var/log
/dev/sda3       20973568 1659992  17410152   9% /var/lib/named
/dev/sda3       20973568 1659992  17410152   9% /var/lib/pgqsl
/dev/sda3       20973568 1659992  17410152   9% /srv
/dev/sda3       20973568 1659992  17410152   9% /var/lib/mailman
/dev/sda3       20973568 1659992  17410152   9% /var/crash
/dev/sda3       20973568 1659992  17410152   9% /usr/local
/dev/sda3       20973568 1659992  17410152   9% /.snapshots
/dev/sda3       20973568 1659992  17410152   9% /opt
/dev/sda3       20973568 1659992  17410152   9% /boot/grub2/x86_64-efi
/dev/sda3       20973568 1659992  17410152   9% /@
/dev/sda3       20973568 1659992  17410152   9% /boot/grub2/i386-pc

>> Without automated snapshots btrfs is not working for me.
>?
>If you make changes to the system with YaST or zypper, you
>should still get a new snapshot. Only cron will not kill your
>filesystem with a huge number of empty snapshots.

With the hourly snapshots there is one very practical aspect:
for the first time one can see what changes from hour to hour on the so-called "static" filesystems.
It is a kind of security feature when feeding the changes detected by snapper into Splunk, for example.
I have had it running for weeks now, and it grew very moderately because of the well-defined default cleanup rules.
So I would encourage everybody to re-enable the hourly snapshots and get, for free, a monitor of what has changed within the
last hours on the server's most valuable filesystems.
It makes life in a production environment with operators a bit more transparent.
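Such a free monitor only needs the numbers of the two newest timeline snapshots; below is a sketch that derives the range from "snapper list" output, assuming the table format shown in my first mail:

```shell
#!/bin/sh
# Print the range of the two newest "timeline" snapshots (e.g. "25..26"),
# suitable for passing to "snapper status" or "snapper diff".
# Field 6 of the pipe-separated table is the Cleanup column.
latest_timeline_range() {
    awk -F'|' '$6 ~ /timeline/ { gsub(/ /, "", $2); n[i++] = $2 }
               END { if (i >= 2) printf "%s..%s\n", n[i-2], n[i-1] }'
}

# Example with canned rows; in production: snapper list | latest_timeline_range
printf '%s\n' \
  'single | 25 |       | Fri Aug  8 08:15:01 2014 | root | timeline | timeline |' \
  'single | 26 |       | Fri Aug  8 09:15:01 2014 | root | timeline | timeline |' \
  | latest_timeline_range
# prints: 25..26
```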

Best regards



Urs Frey 
Post CH AG
Informationstechnologie
IT Betrieb 
Webergutstrasse 12 
3030 Bern (Zollikofen) 
Telefon : ++41 (0)58 338 58 70 
FAX : ++41 (0)58 667 30 07 
E-Mail: urs.frey at post.ch


-----Original Message-----
From: sles-beta-bounces at lists.suse.com [mailto:sles-beta-bounces at lists.suse.com] On behalf of Thorsten Kukuk
Sent: Thursday, August 07, 2014 5:43 PM
To: sles-beta at lists.suse.com
Subject: Re: [sles-beta] SLES 12 x86_64 RC1 snapper does not do snapshots as configured

On Thu, Aug 07, urs.frey at post.ch wrote:

> Hi
> When having SLES12 x86_64 RC1 installed using btrfs, snapper should do hourly snapshots as configured under
> /etc/snapper/configs/root, right?

Not for the root partition.

> Why this?

The root partition is normally not changed that often, so
all these empty snapshots will fill up your hard disk and
make it pretty hard to find the right snapshot later, or to
avoid deleting the only important snapshot during cleanup.
We got a lot of complaints about this, thus we changed the
default.

> There is nothing in the Changelog-Beta10-RC1.
> 
> Without automated snapshots btrfs is not working for me.

?
If you make changes to the system with YaST or zypper, you
should still get a new snapshot. Only cron will not kill your
filesystem with a huge number of empty snapshots.

Else you can still enable it again, nobody prevents you from
doing so.

Thorsten

-- 
Thorsten Kukuk, Senior Architect SLES & Common Code Base
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg)
_______________________________________________
sles-beta mailing list
sles-beta at lists.suse.com
http://lists.suse.com/mailman/listinfo/sles-beta
