* [PATCH v3 0/2] btrfs: scrub: fix scrub_lock
@ 2018-11-30 5:15 Anand Jain
2018-11-30 5:15 ` [PATCH v3 1/2] btrfs: scrub: fix circular locking dependency warning Anand Jain
2018-11-30 5:15 ` [PATCH v3 2/2] btrfs: scrub: add scrub_lock lockdep check in scrub_workers_get Anand Jain
0 siblings, 2 replies; 5+ messages in thread
From: Anand Jain @ 2018-11-30 5:15 UTC (permalink / raw)
To: linux-btrfs
v3: Drops the patch [1] from this set.
[1] btrfs: scrub: maintain the unlock order in scrub thread
Patch 1/2 fixes the circular locking dependency warning, and patch 2/2
adds a lockdep_assert_held() check to scrub_workers_get().
Anand Jain (2):
btrfs: scrub: fix circular locking dependency warning
btrfs: scrub: add scrub_lock lockdep check in scrub_workers_get
fs/btrfs/scrub.c | 5 +++++
1 file changed, 5 insertions(+)
--
1.8.3.1
* [PATCH v3 1/2] btrfs: scrub: fix circular locking dependency warning
2018-11-30 5:15 [PATCH v3 0/2] btrfs: scrub: fix scrub_lock Anand Jain
@ 2018-11-30 5:15 ` Anand Jain
2018-12-04 11:16 ` David Sterba
2018-11-30 5:15 ` [PATCH v3 2/2] btrfs: scrub: add scrub_lock lockdep check in scrub_workers_get Anand Jain
1 sibling, 1 reply; 5+ messages in thread
From: Anand Jain @ 2018-11-30 5:15 UTC (permalink / raw)
To: linux-btrfs
The circular locking dependency check reports the warning [1] below.
That's because btrfs_scrub_dev() calls the stack #0 shown in the report
with fs_info::scrub_lock held. The test case leading to this warning:
  mkfs.btrfs -fq /dev/sdb && mount /dev/sdb /btrfs
  btrfs scrub start -B /btrfs
In fact we have fs_info::scrub_workers_refcnt to track whether the init
and destroy of the scrub workers are needed. So once we have incremented
and decremented the fs_info::scrub_workers_refcnt value in the thread,
it is OK to drop the scrub_lock and only then actually do the
btrfs_destroy_workqueue() part. So this patch drops the scrub_lock
before calling btrfs_destroy_workqueue().
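For illustration, here is a minimal userspace analogue of the inversion
(a sketch only, not code from the tree; the names merely mirror the
kernel ones, and run as-is the program deadlocks by construction):
holding a lock while waiting for a worker whose own lock chain needs
that same lock.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t scrub_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for a queued work item whose recorded lock chain reaches
 * scrub_lock (in the report below it goes through device_list_mutex). */
static void *worker(void *arg)
{
	pthread_mutex_lock(&scrub_lock);
	puts("worker got scrub_lock");
	pthread_mutex_unlock(&scrub_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_mutex_lock(&scrub_lock);
	pthread_create(&t, NULL, worker, NULL);
	/* flush_workqueue() analogue: wait for the worker while still
	 * holding scrub_lock -- the worker can never finish. */
	pthread_join(t, NULL);
	pthread_mutex_unlock(&scrub_lock);
	return 0;
}

Dropping the lock before the wait, as the patch does, breaks exactly
this dependency.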
[1]
[ 76.146826] ======================================================
[ 76.147086] WARNING: possible circular locking dependency detected
[ 76.147316] 4.20.0-rc3+ #41 Not tainted
[ 76.147489] ------------------------------------------------------
[ 76.147722] btrfs/4065 is trying to acquire lock:
[ 76.147984] 0000000038593bc0 ((wq_completion)"%s-%s""btrfs", name){+.+.}, at: flush_workqueue+0x70/0x4d0
[ 76.148337] but task is already holding lock:
[ 76.148594] 0000000062392ab7 (&fs_info->scrub_lock){+.+.}, at: btrfs_scrub_dev+0x316/0x5d0 [btrfs]
[ 76.148909] which lock already depends on the new lock.
[ 76.149191] the existing dependency chain (in reverse order) is:
[ 76.149446] -> #3 (&fs_info->scrub_lock){+.+.}:
[ 76.149707] btrfs_scrub_dev+0x11f/0x5d0 [btrfs]
[ 76.149924] btrfs_ioctl+0x1ac3/0x2d80 [btrfs]
[ 76.150216] do_vfs_ioctl+0xa9/0x6d0
[ 76.150468] ksys_ioctl+0x60/0x90
[ 76.150716] __x64_sys_ioctl+0x16/0x20
[ 76.150911] do_syscall_64+0x50/0x180
[ 76.151182] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 76.151469] -> #2 (&fs_devs->device_list_mutex){+.+.}:
[ 76.151851] reada_start_machine_worker+0xca/0x3f0 [btrfs]
[ 76.152195] normal_work_helper+0xf0/0x4c0 [btrfs]
[ 76.152489] process_one_work+0x1f4/0x520
[ 76.152751] worker_thread+0x46/0x3d0
[ 76.153715] kthread+0xf8/0x130
[ 76.153912] ret_from_fork+0x3a/0x50
[ 76.154178] -> #1 ((work_completion)(&work->normal_work)){+.+.}:
[ 76.154575] worker_thread+0x46/0x3d0
[ 76.154828] kthread+0xf8/0x130
[ 76.155108] ret_from_fork+0x3a/0x50
[ 76.155357] -> #0 ((wq_completion)"%s-%s""btrfs", name){+.+.}:
[ 76.155751] flush_workqueue+0x9a/0x4d0
[ 76.155911] drain_workqueue+0xca/0x1a0
[ 76.156182] destroy_workqueue+0x17/0x230
[ 76.156455] btrfs_destroy_workqueue+0x5d/0x1c0 [btrfs]
[ 76.156756] scrub_workers_put+0x2e/0x60 [btrfs]
[ 76.156931] btrfs_scrub_dev+0x329/0x5d0 [btrfs]
[ 76.157219] btrfs_ioctl+0x1ac3/0x2d80 [btrfs]
[ 76.157491] do_vfs_ioctl+0xa9/0x6d0
[ 76.157742] ksys_ioctl+0x60/0x90
[ 76.157910] __x64_sys_ioctl+0x16/0x20
[ 76.158177] do_syscall_64+0x50/0x180
[ 76.158429] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 76.158716] other info that might help us debug this:
[ 76.158908] Chain exists of:
               (wq_completion)"%s-%s""btrfs", name --> &fs_devs->device_list_mutex --> &fs_info->scrub_lock
[ 76.159629] Possible unsafe locking scenario:
[ 76.160607]        CPU0                    CPU1
[ 76.160934]        ----                    ----
[ 76.161210]   lock(&fs_info->scrub_lock);
[ 76.161458]                                lock(&fs_devs->device_list_mutex);
[ 76.161805]                                lock(&fs_info->scrub_lock);
[ 76.161909]   lock((wq_completion)"%s-%s""btrfs", name);
[ 76.162201] *** DEADLOCK ***
[ 76.162627] 2 locks held by btrfs/4065:
[ 76.162897] #0: 00000000bef2775b (sb_writers#12){.+.+}, at: mnt_want_write_file+0x24/0x50
[ 76.163335] #1: 0000000062392ab7 (&fs_info->scrub_lock){+.+.}, at: btrfs_scrub_dev+0x316/0x5d0 [btrfs]
[ 76.163796] stack backtrace:
[ 76.163911] CPU: 1 PID: 4065 Comm: btrfs Not tainted 4.20.0-rc3+ #41
[ 76.164228] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[ 76.164646] Call Trace:
[ 76.164872] dump_stack+0x5e/0x8b
[ 76.165128] print_circular_bug.isra.37+0x1f1/0x1fe
[ 76.165398] __lock_acquire+0x14aa/0x1620
[ 76.165652] lock_acquire+0xb0/0x190
[ 76.165910] ? flush_workqueue+0x70/0x4d0
[ 76.166175] flush_workqueue+0x9a/0x4d0
[ 76.166420] ? flush_workqueue+0x70/0x4d0
[ 76.166671] ? drain_workqueue+0x52/0x1a0
[ 76.166911] drain_workqueue+0xca/0x1a0
[ 76.167167] destroy_workqueue+0x17/0x230
[ 76.167428] btrfs_destroy_workqueue+0x5d/0x1c0 [btrfs]
[ 76.167720] scrub_workers_put+0x2e/0x60 [btrfs]
[ 76.168233] btrfs_scrub_dev+0x329/0x5d0 [btrfs]
[ 76.168504] ? __sb_start_write+0x121/0x1b0
[ 76.168759] ? mnt_want_write_file+0x24/0x50
[ 76.169654] btrfs_ioctl+0x1ac3/0x2d80 [btrfs]
[ 76.169934] ? find_held_lock+0x2d/0x90
[ 76.170204] ? find_held_lock+0x2d/0x90
[ 76.170450] do_vfs_ioctl+0xa9/0x6d0
[ 76.170690] ? __fget+0x101/0x1f0
[ 76.170910] ? __fget+0x5/0x1f0
[ 76.171157] ksys_ioctl+0x60/0x90
[ 76.171391] __x64_sys_ioctl+0x16/0x20
[ 76.171634] do_syscall_64+0x50/0x180
[ 76.171892] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 76.172186] RIP: 0033:0x7f61d422e567
[ 76.172425] Code: 44 00 00 48 8b 05 29 09 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f9 08 2d 00 f7 d8 64 89 01 48
[ 76.172911] RSP: 002b:00007f61d3936d68 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 76.173328] RAX: ffffffffffffffda RBX: 00000000019026b0 RCX: 00007f61d422e567
[ 76.173649] RDX: 00000000019026b0 RSI: 00000000c400941b RDI: 0000000000000003
[ 76.173909] RBP: 0000000000000000 R08: 00007f61d3937700 R09: 0000000000000000
[ 76.174244] R10: 00007f61d3937700 R11: 0000000000000246 R12: 0000000000000000
[ 76.174566] R13: 0000000000801000 R14: 0000000000000000 R15: 00007f61d3937700
[ 76.175217] btrfs (4065) used greatest stack depth: 11424 bytes left
Signed-off-by: Anand Jain <anand.jain@oracle.com>
---
v2->v3: none
v1->v2: none
fs/btrfs/scrub.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index b5a19ba38ab7..9ade0659f017 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3757,10 +3757,13 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
 
 static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
 {
+	lockdep_assert_held(&fs_info->scrub_lock);
 	if (--fs_info->scrub_workers_refcnt == 0) {
+		mutex_unlock(&fs_info->scrub_lock);
 		btrfs_destroy_workqueue(fs_info->scrub_workers);
 		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
 		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
+		mutex_lock(&fs_info->scrub_lock);
 	}
 	WARN_ON(fs_info->scrub_workers_refcnt < 0);
 }
--
1.8.3.1
* [PATCH v3 2/2] btrfs: scrub: add scrub_lock lockdep check in scrub_workers_get
2018-11-30 5:15 [PATCH v3 0/2] btrfs: scrub: fix scrub_lock Anand Jain
2018-11-30 5:15 ` [PATCH v3 1/2] btrfs: scrub: fix circular locking dependency warning Anand Jain
@ 2018-11-30 5:15 ` Anand Jain
1 sibling, 0 replies; 5+ messages in thread
From: Anand Jain @ 2018-11-30 5:15 UTC (permalink / raw)
To: linux-btrfs
scrub_workers_refcnt is protected by scrub_lock, so add a
lockdep_assert_held() check in scrub_workers_get().
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Suggested-by: Nikolay Borisov <nborisov@suse.com>
---
v3: none
v2: patch introduced
fs/btrfs/scrub.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 9ade0659f017..84ef1f0d371e 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3726,6 +3726,8 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
 	unsigned int flags = WQ_FREEZABLE | WQ_UNBOUND;
 	int max_active = fs_info->thread_pool_size;
 
+	lockdep_assert_held(&fs_info->scrub_lock);
+
 	if (fs_info->scrub_workers_refcnt == 0) {
 		fs_info->scrub_workers = btrfs_alloc_workqueue(fs_info, "scrub",
 				flags, is_dev_replace ? 1 : max_active, 4);
--
1.8.3.1
* Re: [PATCH v3 1/2] btrfs: scrub: fix circular locking dependency warning
2018-11-30 5:15 ` [PATCH v3 1/2] btrfs: scrub: fix circular locking dependency warning Anand Jain
@ 2018-12-04 11:16 ` David Sterba
2018-12-13 2:12 ` Anand Jain
0 siblings, 1 reply; 5+ messages in thread
From: David Sterba @ 2018-12-04 11:16 UTC (permalink / raw)
To: Anand Jain; +Cc: linux-btrfs
On Fri, Nov 30, 2018 at 01:15:23PM +0800, Anand Jain wrote:
> @@ -3757,10 +3757,13 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
>
> static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
> {
> + lockdep_assert_held(&fs_info->scrub_lock);
> if (--fs_info->scrub_workers_refcnt == 0) {
> + mutex_unlock(&fs_info->scrub_lock);
> btrfs_destroy_workqueue(fs_info->scrub_workers);
> btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
> btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
> + mutex_lock(&fs_info->scrub_lock);
> }
> WARN_ON(fs_info->scrub_workers_refcnt < 0);
> }
btrfs/011 lockdep warning is gone, but now there's a list corruption
reported by btrfs/073. I'm testing the 2 patches on top of current
master to avoid interference with misc-next patches.
btrfs/073 [11:07:19]
[ 3580.466293] run fstests btrfs/073 at 2018-12-04 11:07:19
[ 3580.610367] BTRFS info (device vda): disk space caching is enabled
[ 3580.612809] BTRFS info (device vda): has skinny extents
[ 3580.876639] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 1 transid 5 /dev/vdb
[ 3580.880569] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 2 transid 5 /dev/vdc
[ 3580.882947] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 3 transid 5 /dev/vdd
[ 3580.885499] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 4 transid 5 /dev/vde
[ 3580.887972] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 5 transid 5 /dev/vdf
[ 3580.890394] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 6 transid 5 /dev/vdg
[ 3580.893729] BTRFS: device fsid d452261d-c956-4b54-aab9-8318c3c211fc devid 7 transid 5 /dev/vdh
[ 3580.903180] BTRFS info (device vdb): disk space caching is enabled
[ 3580.904538] BTRFS info (device vdb): has skinny extents
[ 3580.905322] BTRFS info (device vdb): flagging fs with big metadata feature
[ 3580.908555] BTRFS info (device vdb): checking UUID tree
[ 3580.951440] ------------[ cut here ]------------
[ 3580.954122] list_add corruption. prev->next should be next (ffffa189faa9acc8), but was ffffa189faa9adc0. (prev=ffffa189faa9a480).
[ 3580.960061] WARNING: CPU: 0 PID: 24578 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
[ 3580.962346] BTRFS info (device vdb): use no compression, level 0
[ 3580.963694] Modules linked in: dm_flakey dm_mod btrfs libcrc32c xor zstd_decompress zstd_compress xxhash raid6_pq loop
[ 3580.963702] CPU: 0 PID: 24578 Comm: btrfs Not tainted 4.20.0-rc5-default+ #366
[ 3580.963703] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.2-0-gf9626cc-prebuilt.qemu-project.org 04/01/2014
[ 3580.963706] RIP: 0010:__list_add_valid+0x4d/0x70
[ 3580.963708] RSP: 0018:ffffb72d88817b90 EFLAGS: 00010286
[ 3580.963709] RAX: 0000000000000000 RBX: ffffb72d88817c10 RCX: 0000000000000000
[ 3580.963710] RDX: 0000000000000002 RSI: 0000000000000001 RDI: ffffffff860c4c1d
[ 3580.963710] RBP: ffffa189faa9acc8 R08: 0000000000000001 R09: 0000000000000000
[ 3580.963714] R10: 0000000000000000 R11: ffffffff882a0a2d R12: ffffa189faa9ac70
[ 3580.966020] BTRFS info (device vdb): disk space caching is enabled
[ 3580.967721] R13: ffffa189faa9a480 R14: ffffa189faa9ac70 R15: ffffa189faa9ac78
[ 3580.967724] FS: 00007f04289d1700(0000) GS:ffffa189fd400000(0000) knlGS:0000000000000000
[ 3580.967725] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3580.967725] CR2: 00007f7b58b034ac CR3: 00000000699d9000 CR4: 00000000000006f0
[ 3580.967729] Call Trace:
[ 3580.967736] __mutex_add_waiter+0x34/0x70
[ 3580.967743] ? drain_workqueue+0x1e/0x180
[ 3580.994465] __mutex_lock+0x134/0x9d0
[ 3580.995526] ? __schedule+0x2eb/0xb20
[ 3580.996584] ? drain_workqueue+0x1e/0x180
[ 3580.997727] drain_workqueue+0x1e/0x180
[ 3580.998793] destroy_workqueue+0x17/0x240
[ 3580.999879] btrfs_destroy_workqueue+0x57/0x200 [btrfs]
[ 3581.001148] scrub_workers_put+0x6c/0x90 [btrfs]
[ 3581.002257] btrfs_scrub_dev+0x2f6/0x590 [btrfs]
[ 3581.003370] ? __sb_start_write+0x12c/0x1d0
[ 3581.004450] ? mnt_want_write_file+0x24/0x60
[ 3581.005613] btrfs_ioctl+0xfc7/0x2f00 [btrfs]
[ 3581.006361] ? get_task_io_context+0x2d/0x90
[ 3581.007025] ? do_vfs_ioctl+0xa2/0x6d0
[ 3581.007699] do_vfs_ioctl+0xa2/0x6d0
[ 3581.008603] ? __fget+0x109/0x1e0
[ 3581.009413] ksys_ioctl+0x3a/0x70
[ 3581.010326] __x64_sys_ioctl+0x16/0x20
[ 3581.011234] do_syscall_64+0x54/0x180
[ 3581.012143] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 3581.013213] RIP: 0033:0x7f042bac9aa7
[ 3581.014133] Code: 00 00 90 48 8b 05 f1 83 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c1 83 2c 00 f7 d8 64 89 01 48
[ 3581.018486] RSP: 002b:00007f04289d0d38 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 3581.019703] RAX: ffffffffffffffda RBX: 000055c05475de10 RCX: 00007f042bac9aa7
[ 3581.021465] RDX: 000055c05475de10 RSI: 00000000c400941b RDI: 0000000000000003
[ 3581.023120] RBP: 0000000000000000 R08: 00007f04289d1700 R09: 0000000000000000
[ 3581.024968] R10: 00007f04289d1700 R11: 0000000000000246 R12: 00007ffd6b29a03e
[ 3581.027977] R13: 00007ffd6b29a03f R14: 00007ffd6b29a040 R15: 0000000000000000
[ 3581.029767] irq event stamp: 0
[ 3581.030631] hardirqs last enabled at (0): [<0000000000000000>] (null)
[ 3581.032630] hardirqs last disabled at (0): [<ffffffff8605c23b>] copy_process.part.72+0x86b/0x1e20
[ 3581.034970] softirqs last enabled at (0): [<ffffffff8605c23b>] copy_process.part.72+0x86b/0x1e20
[ 3581.037224] softirqs last disabled at (0): [<0000000000000000>] (null)
[ 3581.038955] ---[ end trace f0e217183915884a ]---
* Re: [PATCH v3 1/2] btrfs: scrub: fix circular locking dependency warning
2018-12-04 11:16 ` David Sterba
@ 2018-12-13 2:12 ` Anand Jain
0 siblings, 0 replies; 5+ messages in thread
From: Anand Jain @ 2018-12-13 2:12 UTC (permalink / raw)
To: dsterba, linux-btrfs
On 12/04/2018 07:16 PM, David Sterba wrote:
> On Fri, Nov 30, 2018 at 01:15:23PM +0800, Anand Jain wrote:
>> @@ -3757,10 +3757,13 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
>>
>> static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
>> {
>> + lockdep_assert_held(&fs_info->scrub_lock);
[1]
>> if (--fs_info->scrub_workers_refcnt == 0) {
>> + mutex_unlock(&fs_info->scrub_lock);
>> btrfs_destroy_workqueue(fs_info->scrub_workers);
>> btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
>> btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
>> + mutex_lock(&fs_info->scrub_lock);
>> }
>> WARN_ON(fs_info->scrub_workers_refcnt < 0);
>> }
>
> btrfs/011 lockdep warning is gone, but now there's a list corruption
> reported by btrfs/073. I'm testing the 2 patches on top of current
> master to avoid interference with misc-next patches.
Sorry for the delay; I was on vacation.
Thanks for the report. btrfs/073 exposed this on your test setup;
it didn't on mine.
I think I know what is happening: now that scrub_workers_refcnt is zero
at [1] (above) and scrub_lock has been released, scrub_workers_get() can
create a new set of workers even before btrfs_destroy_workqueue() has
completed. I am trying to prove this theory with traps.
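In outline, the suspected window looks like this userspace sketch
(simplified names, an illustration rather than the kernel code; the
kernel-side symptom of hitting the window would be the list_add
corruption in the report above):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t scrub_lock = PTHREAD_MUTEX_INITIALIZER;
static int refcnt;
static void *workers;	/* stands in for the scrub workqueues */

static void workers_get(void)
{
	pthread_mutex_lock(&scrub_lock);
	if (refcnt == 0)
		workers = malloc(64);	/* "allocate the workqueues" */
	refcnt++;
	pthread_mutex_unlock(&scrub_lock);
}

static void workers_put(void)
{
	pthread_mutex_lock(&scrub_lock);
	if (--refcnt == 0) {
		pthread_mutex_unlock(&scrub_lock);
		/*
		 * Window: workers_get() can run here, observe refcnt == 0
		 * and install fresh workers while the old ones are still
		 * being torn down; the teardown below then even operates
		 * on the freshly installed pointer.
		 */
		free(workers);
		pthread_mutex_lock(&scrub_lock);
	}
	pthread_mutex_unlock(&scrub_lock);
}

int main(void)
{
	/* Single-threaded here, so safe; the race needs a concurrent
	 * get() arriving inside the unlocked window in put(). */
	workers_get();
	workers_put();
	return 0;
}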
The scrub threads have weak synchronization and assume that btrfs-progs
is the only process that can call the scrub ioctl, which apparently is
not true. Fixing this properly needs a fairly complete rework, which I
was trying to avoid. Let me see if there is a better fix for this.
Thanks, Anand