* rmdir() succeeds on an empty subvolume
@ 2021-10-10 13:44 Chris Webb
From: Chris Webb @ 2021-10-10 13:44 UTC (permalink / raw)
To: linux-bcachefs; +Cc: Kent Overstreet
If I create an empty subvolume in a bcachefs filesystem, I can then remove
the subvolume directory, causing the filesystem to drop into an emergency
read-only state:
# bcachefs format -q /dev/zram1
initializing new filesystem
going read-write
mounted with opts: (null)
# mkdir -p /tmp/p
# mount -t bcachefs /dev/zram1 /tmp/p
# bcachefs subvolume create /tmp/p/s
# rmdir /tmp/p/s
# touch /tmp/p/test
touch: cannot touch '/tmp/p/test': Read-only file system
# dmesg -t | tail -n 6
bcachefs (zram1): recovering from clean shutdown, journal seq 4
bcachefs (zram1): going read-write
bcachefs (zram1): mounted with opts: (null)
bcachefs (zram1): missing subvolume 2
bcachefs (zram1): emergency read only
bcachefs (zram1): error deleting snapshot keys: -30
Since an unprivileged user can run bcachefs subvolume create in a directory
they own, they too can force the filesystem into this emergency read-only
state.
Similarly, when running rm -r on a directory containing a snapshot, the
contents of the snapshot are successfully cleared out before we run into
trouble removing the subvolume inode itself.
I guess it should either succeed and delete the snapshot/subvolume, or it
should fail with EBUSY like rmdir() does on a mountpoint?
Best wishes,
Chris.
* Other (perhaps related) subvolume strangeness
From: Chris Webb @ 2021-10-10 18:55 UTC (permalink / raw)
To: linux-bcachefs; +Cc: Kent Overstreet
A couple of (perhaps related) issues I noticed...
bcachefs subvolume delete only works on snapshot subvolumes:
# bcachefs subvolume create /mnt/s
# bcachefs subvolume snapshot /mnt/s /mnt/t
# bcachefs subvolume delete /mnt/s
BCH_IOCTL_SUBVOLUME_DESTROY ioctl error: No such file or directory
# bcachefs subvolume delete /mnt/t
#
Following through, bch2_ioctl_subvolume_destroy() calls __bch2_unlink() and
thus bch2_subvolume_delete() with deleting_snapshot == 1, so
0 <= deleting_snapshot != BCH_SUBVOLUME_SNAP(subvol.v)
and it therefore returns -ENOENT. (Just calling __bch2_unlink() with -1
instead of 1 isn't a sufficient fix on its own, though, as this will fail to
delete non-empty snapshot subvolumes.)
After deleting a snapshot, creating a new one also uncovers a bug:
# bcachefs subvolume snapshot /mnt/s
# bcachefs subvolume delete /mnt/s
# ls /mnt
lost+found
# bcachefs subvolume snapshot /mnt/s
BCH_IOCTL_SUBVOLUME_CREATE ioctl error: File exists
If instead I try to create a new snapshot with a different name:
# bcachefs subvolume snapshot /mnt/
[hangs]
^C
# dmesg -t
bcachefs (zram1): recovering from clean shutdown, journal seq 4
bcachefs (zram1): going read-write
bcachefs (zram1): mounted with opts: (null)
------------[ cut here ]------------
kernel BUG at fs/bcachefs/btree_key_cache.c:548!
invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
CPU: 6 PID: 1397 Comm: bcachefs Tainted: G W 5.13.0+ #5
Hardware name: To Be Filled By O.E.M. 4X4 BOX/4X4-4000 Series, BIOS P1.30 11/27/2020
RIP: 0010:bch2_btree_key_cache_verify_clean+0x33/0x40
Code: 8b 45 10 48 89 04 24 48 8b 45 18 48 89 44 24 08 8b 45 20 89 44 24 10 48 8b 3f e8 08 e6 ff ff 48 83 c4 18 48 85 c0 75 02 c9 c3 <0f> 0b 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 57 49 89 ff
RSP: 0018:ffffa5680307ba28 EFLAGS: 00010286
RAX: ffff9b84459d92c0 RBX: ffffa5680307bc38 RCX: ffffffffffffff80
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff9b844108d580
RBP: ffffa5680307ba28 R08: ffff9b8445946049 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000000 R12: ffff9b844ef20000
R13: ffffffff8ea60340 R14: ffff9b844fb24a40 R15: ffff9b844ef20000
FS: 00007f0776dd4380(0000) GS:ffff9b8b2f780000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055efddee1a60 CR3: 000000010d38a000 CR4: 0000000000350ee0
Call Trace:
__bch2_trans_commit+0xea/0x710
__bch2_create+0x30e/0x640
? generic_permission+0x25/0x200
? __bch2_create+0x148/0x640
bch2_fs_file_ioctl+0x718/0x8a0
? bch2_fs_file_ioctl+0x718/0x8a0
__x64_sys_ioctl+0x3e3/0x930
? __x64_sys_ioctl+0x3e3/0x930
? do_sys_openat2+0x82/0x150
do_syscall_64+0x40/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f0776fb46fd
Code: 03 44 24 10 83 c1 08 89 0c 24 eb 0e 48 8b 44 24 08 48 8d 48 08 48 89 4c 24 08 48 8b 10 48 63 ff 48 63 f6 b8 10 00 00 00 0f 05 <9b> 48 63 f8 e8 ba 73 fe ff 48 81 c4 d8 00 00 00 c3 0f be 05 b5 1b
RSP: 002b:00007fff375a84f0 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fff375a9e16 RCX: 00007f0776fb46fd
RDX: 00007fff375a85f0 RSI: 000000004020bc10 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007f0776f89500 R09: 0000000000000000
R10: 00007fff375a8510 R11: 0000000000000206 R12: 000055efddfdbc50
R13: 0000000000000001 R14: 0000000000000000 R15: 000055efdde79012
Modules linked in:
---[ end trace 2ef23b87f042ceae ]---
RIP: 0010:bch2_btree_key_cache_verify_clean+0x33/0x40
Code: 8b 45 10 48 89 04 24 48 8b 45 18 48 89 44 24 08 8b 45 20 89 44 24 10 48 8b 3f e8 08 e6 ff ff 48 83 c4 18 48 85 c0 75 02 c9 c3 <0f> 0b 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 57 49 89 ff
RSP: 0018:ffffa5680307ba28 EFLAGS: 00010286
RAX: ffff9b84459d92c0 RBX: ffffa5680307bc38 RCX: ffffffffffffff80
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff9b844108d580
RBP: ffffa5680307ba28 R08: ffff9b8445946049 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000000 R12: ffff9b844ef20000
R13: ffffffff8ea60340 R14: ffff9b844fb24a40 R15: ffff9b844ef20000
FS: 00007f0776dd4380(0000) GS:ffff9b8b2f780000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055efddee1a60 CR3: 000000010d38a000 CR4: 0000000000350ee0
Best wishes,
Chris.
* Re: Other (perhaps related) subvolume strangeness
From: Kent Overstreet @ 2021-10-11 15:44 UTC (permalink / raw)
To: Chris Webb; +Cc: linux-bcachefs
On Sun, Oct 10, 2021 at 07:55:50PM +0100, Chris Webb wrote:
> A couple of (perhaps related) issues I noticed...
>
> bcachefs subvolume delete only works on snapshot subvolumes:
>
> # bcachefs subvolume create /mnt/s
> # bcachefs subvolume snapshot /mnt/s /mnt/t
> # bcachefs subvolume delete /mnt/s
> BCH_IOCTL_SUBVOLUME_DESTROY ioctl error: No such file or directory
> # bcachefs subvolume delete /mnt/t
> #
For the moment at least, this is intended behaviour, since a regular subvolume
can be removed with rm -rf. I've been meaning to check how btrfs does it and
match their behaviour wherever reasonable - do you know offhand what they do?
> After deleting a snapshot, creating a new one also uncovers a bug:
>
> # bcachefs subvolume snapshot /mnt/s
> # bcachefs subvolume delete /mnt/s
> # ls /mnt
> lost+found
> # bcachefs subvolume snapshot /mnt/s
> BCH_IOCTL_SUBVOLUME_CREATE ioctl error: File exists
Wrote a test that reproduces it; I'll look further - thanks
>
> If instead I try to create a new snapshot with a different name:
>
> # bcachefs subvolume snapshot /mnt/
> [hangs]
> ^C
> # dmesg -t
> bcachefs (zram1): recovering from clean shutdown, journal seq 4
> bcachefs (zram1): going read-write
> bcachefs (zram1): mounted with opts: (null)
> ------------[ cut here ]------------
> kernel BUG at fs/bcachefs/btree_key_cache.c:548!
This one I don't seem to be able to reproduce.
Have you looked at ktest[1]? We've got some subvolume & snapshot tests in
there; if you could add a few test cases for the things you're finding, that'd
be really helpful (and it's a really slick virtual machine testing
environment, too).
[1] https://github.com/koverstreet/ktest/
* Re: Other (perhaps related) subvolume strangeness
From: Chris Webb @ 2021-10-11 19:10 UTC (permalink / raw)
To: Kent Overstreet; +Cc: linux-bcachefs
Kent Overstreet <kent.overstreet@gmail.com> writes:
> Have you looked at ktest[1]? We've got some subvolume & snapshot tests in
> there, if you could add a few test cases for the things you're finding
> that'd be really helpful (and it's a really slick virtual machine testing
> environment, too)
Hi Kent. Ah, I hadn't tried that, thanks! It looks like an excellent way to
make a more useful report, with a test that reliably reproduces the problem.
I've set it up in a Debian chroot to make the dependencies easier to satisfy,
and it's working nicely.
> On Sun, Oct 10, 2021 at 07:55:50PM +0100, Chris Webb wrote:
>
> > After deleting a snapshot, creating a new one also uncovers a bug:
>
> Wrote a test that reproduces it, I'll look further - thanks
This might be the same issue as test_subvol_snapshot_reuse_snapshot_name
from ktest/tests/bcachefs/subvol.ktest?
> > If instead I try to create a new snapshot with a different name:
> > kernel BUG at fs/bcachefs/btree_key_cache.c:548!
>
> This one I don't seem to be able to reproduce.
I failed to reproduce this in ktest until I deleted and recreated a few extra
snapshots in a loop. I'll follow up by putting this in a proper patch against
ktest with a changelog and so on, but
test_subvol_snapshot_delete_repeat()
{
	run_quiet "" bcachefs format -f --errors=panic /dev/sdb
	mount -t bcachefs /dev/sdb /mnt
	for i in $(seq 1 64); do
		bcachefs subvolume snapshot /mnt/$i
		bcachefs subvolume delete /mnt/$i
	done
	umount /mnt
}
reliably reproduces for me using the head of your tree at the time of
writing, 33a19429e827:
00015 ========= TEST subvol_snapshot_delete_repeat
00015
00015 bcachefs (sdb): recovering from clean shutdown, journal seq 4
00015 bcachefs (sdb): going read-write
00015 bcachefs (sdb): mounted with opts: errors=panic
00017 ------------[ cut here ]------------
00017 kernel BUG at fs/bcachefs/btree_key_cache.c:548!
00017 KGDB: Waiting for remote debugger
It fails on attempt 12 for me.
> > bcachefs subvolume delete only works on snapshot subvolumes:
>
> For the moment at least this is intended behaviour, since a regular
> subvolume can be removed with rm -rf. I've been meaning to check how btrfs
> does it, and match their behaviour wherever reasonable - do you know
> offhand what they do?
I don't know, I'm afraid: it's been a few years since I tested out btrfs, and
I was mostly marvelling at the strange new magic of reflinks rather than
subvolumes and snapshots back then. :)
ENOENT (with strerror "No such file or directory") is quite a surprising
result for a user who isn't expecting it, especially as subvolume delete
'sounds' like the opposite of subvolume create.
I guess there's an argument that both snapshots and normal subvolumes can
be deleted by rm -r, and therefore subvolume delete is redundant anyway?
Best wishes,
Chris.
* [PATCH] [ktest] Test iterated snapshot create and delete with distinct names
From: Chris Webb @ 2021-10-11 21:07 UTC (permalink / raw)
To: Kent Overstreet; +Cc: linux-bcachefs
This currently fails, triggering an oops at fs/bcachefs/btree_key_cache.c:548.
Signed-off-by: Chris Webb <chris@arachsys.com>
---
tests/bcachefs/subvol.ktest | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/tests/bcachefs/subvol.ktest b/tests/bcachefs/subvol.ktest
index 67a215b..22df37b 100644
--- a/tests/bcachefs/subvol.ktest
+++ b/tests/bcachefs/subvol.ktest
@@ -231,3 +231,15 @@ test_subvol_delete_snapshot_of_deleted_subvol()
 	umount /mnt
 }
+
+# Fails
+test_subvol_snapshot_delete_repeat()
+{
+	run_quiet "" bcachefs format -f --errors=panic /dev/sdb
+	mount -t bcachefs /dev/sdb /mnt
+	for i in $(seq 1 64); do
+		bcachefs subvolume snapshot /mnt/$i
+		bcachefs subvolume delete /mnt/$i
+	done
+	umount /mnt
+}
* Re: rmdir() succeeds on an empty subvolume
From: Chris Webb @ 2021-10-11 19:20 UTC (permalink / raw)
To: linux-bcachefs; +Cc: Kent Overstreet
Chris Webb <chris@arachsys.com> writes:
> If I create an empty snapshot in a bcachefs filesystem, I can then remove
> the snapshot directory causing the filesystem to drop into an emergency
> read-only state:
Following up for completeness: this one is fixed by 33a19429e827, which you
pushed earlier today. Thanks!
Best wishes,
Chris.