* btrfs resize partition problem
@ 2013-11-16 20:37 Dejan Ribič
  2013-11-16 22:19 ` Duncan
  0 siblings, 1 reply; 4+ messages in thread
From: Dejan Ribič @ 2013-11-16 20:37 UTC (permalink / raw)
  To: linux-btrfs

Hello,

Originally I had two separate ext4 partitions for root and home. I 
recently converted my root ext4 partition to btrfs, primarily because of 
snapshots. I also created a subvolume for /var/cache/pacman/pkg because 
I didn't want the packages in snapshots, and that worked out great; I 
have been running this setup without problems for two weeks now. But it 
got me thinking: why do I even have a separate partition for home? So I 
created a backup of my /home/stayerc directory and deleted the ext4 
partition, but when I tried to resize the btrfs partition using the 
latest gparted live CD, an error occurred (details are at the bottom of 
this email). I would really appreciate any help.

Cheers,

Dejan

PS: Please Cc to me I am not subscribed

save-details from gparted:
---------------------------------------------
GParted 0.16.1 --enable-libparted-dmraid
Libparted 2.3
Move /dev/sda8 to the left and grow it from 20.95 GiB to 41.90 GiB 
  00:01:19    ( ERROR )
calibrate /dev/sda8  00:00:00    ( SUCCESS )
path: /dev/sda8
start: 1190326272
end: 1234257919
size: 43931648 (20.95 GiB)
check file system on /dev/sda8 for errors and (if possible) fix them 
  00:01:19    ( ERROR )
btrfsck /dev/sda8
Checking filesystem on /dev/sda8
UUID: 5f32176b-fa13-4af2-a5d9-ffbc558bdb13
free space inode generation (0) did not match free space cache 
generation (1122)
free space inode generation (0) did not match free space cache 
generation (1122)
free space inode generation (0) did not match free space cache 
generation (1122)
free space inode generation (0) did not match free space cache 
generation (1122)
free space inode generation (0) did not match free space cache 
generation (1109)
found 3430820721 bytes used err is 19
total csum bytes: 7677820
total tree bytes: 569364480
total fs tree bytes: 529350656
total extent tree bytes: 29700096
btree space waste bytes: 161755833
file data blocks allocated: 38304477184
referenced 16981073920
Btrfs v0.20-rc1
checking extents
checking free space cache
checking fs roots
checking csums
There are no extents for csum range 0-69632
Csum exists for 0-69632 but there is no extent record
There are no extents for csum range 1121083392-1121087488
Csum exists for 1121083392-1121087488 but there is no extent record
There are no extents for csum range 1121099776-1121103872
Csum exists for 1121099776-1127346176 but there is no extent record
There are no extents for csum range 1127489536-1127493632
Csum exists for 1127489536-1127538688 but there is no extent record
There are no extents for csum range 1127550976-1127555072
Csum exists for 1127550976-1404313600 but there is no extent record
There are no extents for csum range 1404358656-1404362752
Csum exists for 1404358656-1404362752 but there is no extent record
There are no extents for csum range 1404375040-1404379136
Csum exists for 1404375040-1592889344 but there is no extent record
There are no extents for csum range 1954291712-1954295808
Csum exists for 1954291712-1970950144 but there is no extent record
There are no extents for csum range 1971093504-1971097600
Csum exists for 1971093504-2020798464 but there is no extent record
There are no extents for csum range 2021244928-2021249024
Csum exists for 2021244928-2021253120 but there is no extent record
There are no extents for csum range 2021699584-2021703680
Csum exists for 2021699584-2022080512 but there is no extent record
There are no extents for csum range 2022526976-2022531072
Csum exists for 2022526976-2277900288 but there is no extent record
There are no extents for csum range 2277912576-2277916672
Csum exists for 2277912576-2277916672 but there is no extent record
There are no extents for csum range 2277928960-2277933056
Csum exists for 2277928960-2277933056 but there is no extent record
There are no extents for csum range 2277945344-2277949440
Csum exists for 2277945344-2277949440 but there is no extent record
There are no extents for csum range 2277961728-2277965824
Csum exists for 2277961728-2277969920 but there is no extent record
There are no extents for csum range 2277982208-2277986304
Csum exists for 2277982208-2277986304 but there is no extent record
There are no extents for csum range 2277998592-2278002688
Csum exists for 2277998592-2278014976 but there is no extent record
There are no extents for csum range 2278027264-2278031360
Csum exists for 2278027264-2392010752 but there is no extent record
========================================



* Re: btrfs resize partition problem
  2013-11-16 20:37 btrfs resize partition problem Dejan Ribič
@ 2013-11-16 22:19 ` Duncan
  2013-11-17 12:05   ` Russell Coker
  0 siblings, 1 reply; 4+ messages in thread
From: Duncan @ 2013-11-16 22:19 UTC (permalink / raw)
  To: linux-btrfs

Dejan Ribič posted on Sat, 16 Nov 2013 21:37:09 +0100 as excerpted:

> but it got me thinking why do I even have a separate partition for home

[List plus direct mail reply, as requested.  Please remind me again with 
followups and don't post to both me and the list as I do follow the list 
and don't need duplicates.]

I've no direct answer to your posted problem, tho I have some 
suggestions.  But, based on your mention of pacman I guess you're on 
arch, and FWIW I'm on gentoo, both considered reasonably "expert" level 
distros, and based on that...

Far be it from me to interfere with another admin's partitioning choices, 
but because the question came up and based both on general 
recommendations and my own at times hard learned experience...

People often use separate partitions because they don't want all their 
data eggs in one basket, and because it makes administration easier for 
some things.  A read-only by default rootfs is far safer in the event of 
a system crash, for instance, and can be quite practical if any data 
that's routinely written is kept on other partitions (like /home) while a 
read-only /home isn't viable for a normal desktop use-case, at least.  
While it's possible to mount a subvolume read-only while another is 
mounted read-write, or to use bind-mounts on part of a filesystem to make 
only part of it read-write, much of the data safety of the read-only side 
disappears if they're on the same overall filesystem, since it's the same 
overall filesystem tree exposed to corruption in the case of a crash.  
Keep the filesystems separate, and read-only mounts are relatively 
unlikely to be harmed at all in the event of a crash, generally limiting 
risk to read-write mounted filesystems.

A read-only root (and /usr if it's separately mounted, not so often these 
days and /usr is on rootfs here) is particularly useful, since that's 
normally where all the recovery tools live, along with full usage 
documentation (manpages, etc., NOT typically available in an initr*-based 
recovery situation), meaning if a working full rootfs is mountable, it's 
far easier to do further recovery from there, and a read-only-by-default 
rootfs makes problem free mounting of that rootfs FAR more likely!

Meanwhile, /home is often kept separate both because it usually needs to 
be mounted writable, and because that makes dealing with user data only, 
generally the most valuable part of a desktop/laptop installation, far 
easier.

Similarly, either all of /var, or bits such as /var/log, /var/cache, 
/var/spool, etc., are often managed separately so they can be writable, some of 
them (/var/run) can be tmpfs, etc.  And keeping /var/log in particular on 
its own partition tends to be VERY helpful in a runaway logging event, 
since the full partition is then caught rather sooner and resultant 
damage is confined to logs.  Additionally, logfiles tend to be actively 
open for write in a crash, and keeping an independent /var/log again 
drastically limits the likely damage to /just/ /var/log.
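The layout sketched in the last few paragraphs might look roughly like 
this as an /etc/fstab (device names, filesystem choices, and options are 
invented purely for illustration; adjust for your own system):

```
# Hypothetical split: read-only root, writable /home and /var/log
/dev/sda1   /         btrfs   ro,noatime        0 1
/dev/sda2   /home     btrfs   rw,noatime        0 2
/dev/sda3   /var/log  ext4    rw,noatime        0 2
tmpfs       /var/run  tmpfs   rw,nosuid,nodev   0 0
```

With this split, a runaway logger fills only the /var/log partition, and 
a crash mid-write can only dirty the rw filesystems.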

While the case can certainly be debated and a lot of the big name distros 
*ARE* going for a single big btrfs with a bunch of subvolumes these days, 
I expect any admin with a decent bit of hard-earned experience under his 
belt will view such a practice as suspect, likely EXTREMELY suspect.  
"Let the distros do what they want by default, but that's not getting 
anywhere NEAR *MY* systems!!" level suspect!

Certainly that's the case here.  There's a /reason/ I maintain separate 
partitions.  That reason is that doing so has MANY times saved my data!


That goes double for an experimental filesystem under heavy development, 
as btrfs remains ATM.  Certainly, "keep solid and tested backups" applies 
even more to experimental filesystems such as btrfs than to stable 
filesystems such as ext3/4 and reiserfs.  NOT keeping tested backups of 
any data you're putting on an experimental filesystem such as btrfs 
demonstrates by action that you do NOT care about that data, whatever you 
might SAY.  But that does NOT mean throwing routine caution to the wind!

And again, btrfs being experimental as it is, a read-only-by-default 
rootfs (or even read-write by default, since it's relatively unlikely to 
have been actively written at the time of a crash) tends not to suffer 
the damage that constantly-written filesystems such as /home and /var/log 
do, so keeping them on entirely separate filesystems makes even MORE 
sense, as it severely limits the risk placed on the rootfs, making 
recovery of damaged filesystems both shorter and easier, since they are 
smaller and there's simply less data and metadata involved /to/ need 
recovery.


OTOH, the big name distros are going subvolumed btrfs, and if it's good 
enough for them...

But it's *STILL* not getting anywhere near *MY* systems!  Let them do 
what they do, I've learned waayyy too many of my lessons the HARD way, 
and I'm *NOT* going to unlearn them just to have to learn them again!

That said, your system, your call.  I'd not /dream/ of taking that right 
away from you. =:^)


Meanwhile, addressing your problem:  Try mounting with the clear_cache 
option as described on the btrfs wiki, under documentation, mount 
options.  Also, the fact that you weren't already aware of that hints 
that you likely weren't aware of the wiki itself, or haven't spent much 
time reading it.  I'd suggest you do so, as there's likely quite a bit 
more information there that you'll find useful:

https://btrfs.wiki.kernel.org

https://btrfs.wiki.kernel.org/index.php/Mount_options
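For reference, the clear_cache mount suggested above looks something like 
this (device and mountpoint are taken from the gparted log and are only 
an example; this needs root, so it's a sketch rather than something to 
paste blindly):

```shell
# One-time mount with clear_cache to invalidate and rebuild the free
# space cache, which is what btrfsck's "free space inode generation ...
# did not match" messages are complaining about.
mount -o clear_cache /dev/sda8 /mnt
# Leave it mounted briefly so the cache gets rewritten, then unmount.
umount /mnt
```

After that, subsequent mounts can drop the option; the cache is rebuilt 
automatically.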

Finally, keep in mind that btrfs does remain experimental at this point, 
under rapid development, and anyone using it is in effect volunteering to 
test btrfs using their data.  I *STRONGLY* recommend a backup and backup 
recovery testing strategy keeping that in mind.  Similarly, keeping 
current on both kernel and btrfs-progs is vital -- you should be on at 
LEAST a 3.11 kernel if not 3.12 by now and likely switching to 3.13 
sometime in the development cycle, as running btrfs on a kernel more than 
two releases old means you're unnecessarily risking your data to known 
patched bugs, as well as making any problem reports less useful.  And 
btrfs-progs should be at LEAST version 0.20-rc1, which is already about a 
year old, and preferably you should be running a recent git build, as 
btrfs-progs development happens in branches and the git master branch 
policy is release quality at all times.
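A quick way to check what you're actually running, to compare against the 
current releases (the exact `btrfs version` invocation can vary between 
progs releases, hence the fallback):

```shell
# Report the running kernel version...
uname -r
# ...and the installed btrfs-progs version, if any.
btrfs version 2>/dev/null || echo "btrfs-progs not installed"
```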

And as a btrfs tester, you really /should/ either subscribe to the list, 
or follow it regularly somewhere like gmane.org (FWIW I use their nntp 
interface here), as that way you know what's going on and may well get a 
heads-up on bugs before they affect you, or at least know better how to 
fix them when they do.  Of course, nobody's forcing you.  But it's your 
data at risk (or at least restore time, since your data should be backed 
up and thus restorable) if you hit a bug that might have been avoided had 
you been following the list and would have thus known about it before you 
hit it.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: btrfs resize partition problem
  2013-11-16 22:19 ` Duncan
@ 2013-11-17 12:05   ` Russell Coker
  2013-11-17 20:05     ` Duncan
  0 siblings, 1 reply; 4+ messages in thread
From: Russell Coker @ 2013-11-17 12:05 UTC (permalink / raw)
  To: linux-btrfs

On Sun, 17 Nov 2013, Duncan <1i5t5.duncan@cox.net> wrote:
> People often use separate partitions because they don't want all their
> data eggs in one basket, and because it makes administration easier for
> some things.  A read-only by default rootfs is far safer in the event of

I have a laptop with two partitions, one is encrypted and has root /home, etc.  
The other isn't encrypted and has things like the latest TED talks I 
downloaded.  The potential problem with this scheme is that if the volume of 
encrypted data grows beyond the space allocated for it then things will 
become difficult.  But as the unencrypted data tends to grow faster, that 
doesn't seem likely.

I'm just planning my first BTRFS server which will run in a location other 
than my home.  This is significant because my ability to fix things will be 
limited.  For that server I will use Ext4 for / and BTRFS for everything else.  
Then if something goes wrong there will be a chance that I can at least login 
remotely to fix the BTRFS filesystem.

My home server has / and /home on a SSD and a RAID-1 array of 2*3TB disks 
mounted on /big.  I have had a number of BTRFS related problems with that 
system and BTRFS for / hasn't made it easier to solve them.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/


* Re: btrfs resize partition problem
  2013-11-17 12:05   ` Russell Coker
@ 2013-11-17 20:05     ` Duncan
  0 siblings, 0 replies; 4+ messages in thread
From: Duncan @ 2013-11-17 20:05 UTC (permalink / raw)
  To: linux-btrfs

Russell Coker posted on Sun, 17 Nov 2013 23:05:04 +1100 as excerpted:

> My home server has / and /home on a SSD and a RAID-1 array of 2*3TB
> disks mounted on /big.  I have had a number of BTRFS related problems
> with that system and BTRFS for / hasn't made it easier to solve them.

Once drive sizes got big enough, the way I solved the how-to-fix-root 
problem here is by keeping two (or three) identically sized root and root-
bak partitions around.  Periodically, when I'm content that the system in 
general is stable-state and working well[1], I'll blow away the backup-
root and snapshot-copy the working root over to it again.  (That's why I 
like having a third copy too, what happens if something goes wrong with 
the working copy at just the moment I've blown away the backup in order 
to redo it?)
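Duncan doesn't spell out his exact commands, but the periodic 
backup-root refresh described above could be sketched like this (device 
names and mountpoints are invented; this needs root and a quiescent 
system, so treat it as an outline only):

```shell
# Refresh the backup root partition from the working root.
mount -o ro /dev/sda5 /mnt/root        # working root, read-only for safety
mount /dev/sda6 /mnt/root-bak          # backup root, to be overwritten
# Mirror the working root onto the backup, preserving hardlinks,
# ACLs, and xattrs, deleting anything no longer present.
rsync -aHAX --delete /mnt/root/ /mnt/root-bak/
umount /mnt/root /mnt/root-bak
```

With a third copy, the same refresh simply rotates through the spare 
partitions so one good backup always exists while another is being 
rewritten.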

And with separate partitions for /home, multi-media, packages/update, 
/boot, and /var/log (plus /tmp and /run on tmpfs in memory), the root 
partition, containing basically the entire installed operating system 
including all apps and their system-level configuration and data plus 
installed-package database[2], is /only/ 8 gigs in size, df reporting 25% 
used so there's a nice, comfortable safety margin.  That means I keep (at 
least) three 8-gig root partitions in various places, working-root, 
primary and secondary backup-root.

In addition to functioning as full root backups, each of the three is 
independently bootable by simply changing the root= parameter fed to the 
kernel at boot time, and in fact, my grub2 menu is setup so I can choose 
any of the three direct from it, without even having to drop to the grub 
commandline.  As a result, if root needs a proper fsck or if an after-all 
~arch/testing profile update broke something critical and I can't boot 
the working root, I simply choose the primary or secondary backup root, 
and have a full working system (manpages/documentation, X-desktop and 
browser, even media players!) exactly the same as it was when I took that 
backup.
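A grub2 menu along the lines described might look roughly like this 
(device names, kernel path, and entry titles are invented for 
illustration; a real config would usually also set the grub root or use 
search):

```
menuentry "Gentoo (working root)" {
    linux /vmlinuz root=/dev/sda5 ro
}
menuentry "Gentoo (backup root, primary)" {
    linux /vmlinuz root=/dev/sda6 ro
}
menuentry "Gentoo (backup root, secondary)" {
    linux /vmlinuz root=/dev/sda7 ro
}
```

The only difference between the entries is the root= parameter, which is 
exactly what makes each backup independently bootable.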

My actual drive setup is currently two SSDs where I run btrfs partitions 
mostly in raid1 mode, and a larger legacy spinning-rust drive with the 
media partition (and its primary backup) and additional non-btrfs backups 
of the main system, plus another (external) spinning-rust drive with 
further backups.  Since btrfs is still experimental, for the purposes of 
backup policy I worst-case btrfs as totally lost and keep backups as if I 
didn't have the SSDs and btrfs at all.  (Which means in all I actually 
have something like working copy plus six levels of backup for root, the 
working copy and primary backup on btrfs on the ssds that I don't count 
because btrfs is experimental, the fallback working copy and two backups 
on the internal spinning rust reiserfs should the experimental label of 
btrfs live up to its name, and the two external drive last-resort 
backups, tho they're actually rather outdated ATM, but I could fallback 
to them if I had to.)

---
[1] Stable state, working well:  I'm on gentoo, a rolling-release distro, 
running ~arch aka testing, not stable, and in fact I'm often running live-
git pre-release versions of this or that as well, so sometimes I do /not/ 
consider the system in a stable state!

[2] Installed-package database:  Remember, gentoo's rolling-release, so 
keeping track of what's actually installed is important.  One disaster-
recovery experience some years ago taught me the importance of that when 
I had the installed-package database on a partition separate from root 
with all the actual installed-packages it was tracking, and I ended up 
restoring from backups where the installed-package database was from a 
different date and thus out of sync with what was /actually/ installed!  
*NEVER* *AGAIN*!  They're on the same filesystem now, so they stay in 
sync and restore together!

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


