* [PATCH 0/2] btrfs: scrub: fix scrub_lock
@ 2018-11-26  9:07 Anand Jain
  2018-11-26  9:07 ` [PATCH 1/2] btrfs: scrub: maintain the unlock order in scrub thread Anand Jain
  2018-11-26  9:07 ` [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning Anand Jain
  0 siblings, 2 replies; 10+ messages in thread
From: Anand Jain @ 2018-11-26  9:07 UTC (permalink / raw)
  To: linux-btrfs

Anand Jain (2):
  btrfs: scrub: maintain the unlock order in scrub thread
  btrfs: scrub: fix circular locking dependency warning

 fs/btrfs/scrub.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/2] btrfs: scrub: maintain the unlock order in scrub thread
  2018-11-26  9:07 [PATCH 0/2] btrfs: scrub: fix scrub_lock Anand Jain
@ 2018-11-26  9:07 ` Anand Jain
  2018-11-26  9:47   ` Nikolay Borisov
  2018-11-26  9:07 ` [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning Anand Jain
  1 sibling, 1 reply; 10+ messages in thread
From: Anand Jain @ 2018-11-26  9:07 UTC (permalink / raw)
  To: linux-btrfs

The fs_info::device_list_mutex and fs_info::scrub_lock create nested
locks in btrfs_scrub_dev(). The acquire hierarchy is
fs_info::device_list_mutex first and then fs_info::scrub_lock, so
unlock in the reverse order: fs_info::scrub_lock first and then
fs_info::device_list_mutex.

Signed-off-by: Anand Jain <anand.jain@oracle.com>
---
 fs/btrfs/scrub.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 902819d3cf41..b1c2d1cdbd4b 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3865,7 +3865,6 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
 	}
 	sctx->readonly = readonly;
 	dev->scrub_ctx = sctx;
-	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
 
 	/*
 	 * checking @scrub_pause_req here, we can avoid
@@ -3875,15 +3874,14 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
 	atomic_inc(&fs_info->scrubs_running);
 	mutex_unlock(&fs_info->scrub_lock);
 
-	if (!is_dev_replace) {
-		/*
-		 * by holding device list mutex, we can
-		 * kick off writing super in log tree sync.
-		 */
-		mutex_lock(&fs_info->fs_devices->device_list_mutex);
+	/*
+	 * by holding device list mutex, we can kick off writing super in log
+	 * tree sync.
+	 */
+	if (!is_dev_replace)
 		ret = scrub_supers(sctx, dev);
-		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
-	}
+
+	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
 
 	if (!ret)
 		ret = scrub_enumerate_chunks(sctx, dev, start, end);
-- 
1.8.3.1



* [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning
  2018-11-26  9:07 [PATCH 0/2] btrfs: scrub: fix scrub_lock Anand Jain
  2018-11-26  9:07 ` [PATCH 1/2] btrfs: scrub: maintain the unlock order in scrub thread Anand Jain
@ 2018-11-26  9:07 ` Anand Jain
  2018-11-26  9:59   ` Nikolay Borisov
  1 sibling, 1 reply; 10+ messages in thread
From: Anand Jain @ 2018-11-26  9:07 UTC (permalink / raw)
  To: linux-btrfs

The circular locking dependency check reports the warning [1]. That is
because btrfs_scrub_dev() calls stack #0 below with
fs_info::scrub_lock held. The test case leading to this warning:

  mkfs.btrfs -fq /dev/sdb && mount /dev/sdb /btrfs
  btrfs scrub start -B /btrfs

In fact we have fs_info::scrub_workers_refcnt to track whether the init
and destroy of the scrub workers are needed. So once we have
incremented and decremented the fs_info::scrub_workers_refcnt value in
the thread, it is OK to drop the scrub_lock and then actually do the
btrfs_destroy_workqueue() part. So this patch drops the scrub_lock
before calling btrfs_destroy_workqueue().

[1]
[   76.146826] ======================================================
[   76.147086] WARNING: possible circular locking dependency detected
[   76.147316] 4.20.0-rc3+ #41 Not tainted
[   76.147489] ------------------------------------------------------
[   76.147722] btrfs/4065 is trying to acquire lock:
[   76.147984] 0000000038593bc0 ((wq_completion)"%s-%s""btrfs",
name){+.+.}, at: flush_workqueue+0x70/0x4d0
[   76.148337]
but task is already holding lock:
[   76.148594] 0000000062392ab7 (&fs_info->scrub_lock){+.+.}, at:
btrfs_scrub_dev+0x316/0x5d0 [btrfs]
[   76.148909]
which lock already depends on the new lock.

[   76.149191]
the existing dependency chain (in reverse order) is:
[   76.149446]
-> #3 (&fs_info->scrub_lock){+.+.}:
[   76.149707]        btrfs_scrub_dev+0x11f/0x5d0 [btrfs]
[   76.149924]        btrfs_ioctl+0x1ac3/0x2d80 [btrfs]
[   76.150216]        do_vfs_ioctl+0xa9/0x6d0
[   76.150468]        ksys_ioctl+0x60/0x90
[   76.150716]        __x64_sys_ioctl+0x16/0x20
[   76.150911]        do_syscall_64+0x50/0x180
[   76.151182]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[   76.151469]
-> #2 (&fs_devs->device_list_mutex){+.+.}:
[   76.151851]        reada_start_machine_worker+0xca/0x3f0 [btrfs]
[   76.152195]        normal_work_helper+0xf0/0x4c0 [btrfs]
[   76.152489]        process_one_work+0x1f4/0x520
[   76.152751]        worker_thread+0x46/0x3d0
[   76.153715]        kthread+0xf8/0x130
[   76.153912]        ret_from_fork+0x3a/0x50
[   76.154178]
-> #1 ((work_completion)(&work->normal_work)){+.+.}:
[   76.154575]        worker_thread+0x46/0x3d0
[   76.154828]        kthread+0xf8/0x130
[   76.155108]        ret_from_fork+0x3a/0x50
[   76.155357]
-> #0 ((wq_completion)"%s-%s""btrfs", name){+.+.}:
[   76.155751]        flush_workqueue+0x9a/0x4d0
[   76.155911]        drain_workqueue+0xca/0x1a0
[   76.156182]        destroy_workqueue+0x17/0x230
[   76.156455]        btrfs_destroy_workqueue+0x5d/0x1c0 [btrfs]
[   76.156756]        scrub_workers_put+0x2e/0x60 [btrfs]
[   76.156931]        btrfs_scrub_dev+0x329/0x5d0 [btrfs]
[   76.157219]        btrfs_ioctl+0x1ac3/0x2d80 [btrfs]
[   76.157491]        do_vfs_ioctl+0xa9/0x6d0
[   76.157742]        ksys_ioctl+0x60/0x90
[   76.157910]        __x64_sys_ioctl+0x16/0x20
[   76.158177]        do_syscall_64+0x50/0x180
[   76.158429]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[   76.158716]
other info that might help us debug this:

[   76.158908] Chain exists of:
  (wq_completion)"%s-%s""btrfs", name --> &fs_devs->device_list_mutex
--> &fs_info->scrub_lock

[   76.159629]  Possible unsafe locking scenario:

[   76.160607]        CPU0                    CPU1
[   76.160934]        ----                    ----
[   76.161210]   lock(&fs_info->scrub_lock);
[   76.161458]
lock(&fs_devs->device_list_mutex);
[   76.161805]
lock(&fs_info->scrub_lock);
[   76.161909]   lock((wq_completion)"%s-%s""btrfs", name);
[   76.162201]
 *** DEADLOCK ***

[   76.162627] 2 locks held by btrfs/4065:
[   76.162897]  #0: 00000000bef2775b (sb_writers#12){.+.+}, at:
mnt_want_write_file+0x24/0x50
[   76.163335]  #1: 0000000062392ab7 (&fs_info->scrub_lock){+.+.}, at:
btrfs_scrub_dev+0x316/0x5d0 [btrfs]
[   76.163796]
stack backtrace:
[   76.163911] CPU: 1 PID: 4065 Comm: btrfs Not tainted 4.20.0-rc3+ #41
[   76.164228] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS
VirtualBox 12/01/2006
[   76.164646] Call Trace:
[   76.164872]  dump_stack+0x5e/0x8b
[   76.165128]  print_circular_bug.isra.37+0x1f1/0x1fe
[   76.165398]  __lock_acquire+0x14aa/0x1620
[   76.165652]  lock_acquire+0xb0/0x190
[   76.165910]  ? flush_workqueue+0x70/0x4d0
[   76.166175]  flush_workqueue+0x9a/0x4d0
[   76.166420]  ? flush_workqueue+0x70/0x4d0
[   76.166671]  ? drain_workqueue+0x52/0x1a0
[   76.166911]  drain_workqueue+0xca/0x1a0
[   76.167167]  destroy_workqueue+0x17/0x230
[   76.167428]  btrfs_destroy_workqueue+0x5d/0x1c0 [btrfs]
[   76.167720]  scrub_workers_put+0x2e/0x60 [btrfs]
[   76.168233]  btrfs_scrub_dev+0x329/0x5d0 [btrfs]
[   76.168504]  ? __sb_start_write+0x121/0x1b0
[   76.168759]  ? mnt_want_write_file+0x24/0x50
[   76.169654]  btrfs_ioctl+0x1ac3/0x2d80 [btrfs]
[   76.169934]  ? find_held_lock+0x2d/0x90
[   76.170204]  ? find_held_lock+0x2d/0x90
[   76.170450]  do_vfs_ioctl+0xa9/0x6d0
[   76.170690]  ? __fget+0x101/0x1f0
[   76.170910]  ? __fget+0x5/0x1f0
[   76.171157]  ksys_ioctl+0x60/0x90
[   76.171391]  __x64_sys_ioctl+0x16/0x20
[   76.171634]  do_syscall_64+0x50/0x180
[   76.171892]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[   76.172186] RIP: 0033:0x7f61d422e567
[   76.172425] Code: 44 00 00 48 8b 05 29 09 2d 00 64 c7 00 26 00 00 00
48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f
05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f9 08 2d 00 f7 d8 64 89 01 48
[   76.172911] RSP: 002b:00007f61d3936d68 EFLAGS: 00000246 ORIG_RAX:
0000000000000010
[   76.173328] RAX: ffffffffffffffda RBX: 00000000019026b0 RCX:
00007f61d422e567
[   76.173649] RDX: 00000000019026b0 RSI: 00000000c400941b RDI:
0000000000000003
[   76.173909] RBP: 0000000000000000 R08: 00007f61d3937700 R09:
0000000000000000
[   76.174244] R10: 00007f61d3937700 R11: 0000000000000246 R12:
0000000000000000
[   76.174566] R13: 0000000000801000 R14: 0000000000000000 R15:
00007f61d3937700
[   76.175217] btrfs (4065) used greatest stack depth: 11424 bytes left

Signed-off-by: Anand Jain <anand.jain@oracle.com>
---
 fs/btrfs/scrub.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index b1c2d1cdbd4b..3fc31eeef2a7 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3757,10 +3757,13 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
 
 static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
 {
+	lockdep_assert_held(&fs_info->scrub_lock);
 	if (--fs_info->scrub_workers_refcnt == 0) {
+		mutex_unlock(&fs_info->scrub_lock);
 		btrfs_destroy_workqueue(fs_info->scrub_workers);
 		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
 		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
+		mutex_lock(&fs_info->scrub_lock);
 	}
 	WARN_ON(fs_info->scrub_workers_refcnt < 0);
 }
-- 
1.8.3.1



* Re: [PATCH 1/2] btrfs: scrub: maintain the unlock order in scrub thread
  2018-11-26  9:07 ` [PATCH 1/2] btrfs: scrub: maintain the unlock order in scrub thread Anand Jain
@ 2018-11-26  9:47   ` Nikolay Borisov
  2018-11-27 13:37     ` Anand Jain
  0 siblings, 1 reply; 10+ messages in thread
From: Nikolay Borisov @ 2018-11-26  9:47 UTC (permalink / raw)
  To: Anand Jain, linux-btrfs



On 26.11.18 г. 11:07 ч., Anand Jain wrote:
> The fs_info::device_list_mutex and fs_info::scrub_lock create nested
> locks in btrfs_scrub_dev(). The acquire hierarchy is
> fs_info::device_list_mutex first and then fs_info::scrub_lock, so
> unlock in the reverse order: fs_info::scrub_lock first and then
> fs_info::device_list_mutex.
> 
> Signed-off-by: Anand Jain <anand.jain@oracle.com>
> ---
>  fs/btrfs/scrub.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 902819d3cf41..b1c2d1cdbd4b 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -3865,7 +3865,6 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>  	}
>  	sctx->readonly = readonly;
>  	dev->scrub_ctx = sctx;
> -	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
>  
>  	/*
>  	 * checking @scrub_pause_req here, we can avoid
> @@ -3875,15 +3874,14 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>  	atomic_inc(&fs_info->scrubs_running);
>  	mutex_unlock(&fs_info->scrub_lock);
>  
> -	if (!is_dev_replace) {
> -		/*
> -		 * by holding device list mutex, we can
> -		 * kick off writing super in log tree sync.
> -		 */
> -		mutex_lock(&fs_info->fs_devices->device_list_mutex);
> +	/*
> +	 * by holding device list mutex, we can kick off writing super in log
> +	 * tree sync.
> +	 */
> +	if (!is_dev_replace)
>  		ret = scrub_supers(sctx, dev);
> -		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
> -	}
> +
> +	mutex_unlock(&fs_info->fs_devices->device_list_mutex);

Have you considered whether this change will have any negative impact
due to the fact that __scrub_blocked_if_needed() can now go to sleep for
an arbitrary time with device_list_mutex held?

>  
>  	if (!ret)
>  		ret = scrub_enumerate_chunks(sctx, dev, start, end);
> 


* Re: [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning
  2018-11-26  9:07 ` [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning Anand Jain
@ 2018-11-26  9:59   ` Nikolay Borisov
  2018-11-28  8:47     ` Anand Jain
  0 siblings, 1 reply; 10+ messages in thread
From: Nikolay Borisov @ 2018-11-26  9:59 UTC (permalink / raw)
  To: Anand Jain, linux-btrfs



On 26.11.18 г. 11:07 ч., Anand Jain wrote:
> The circular locking dependency check reports the warning [1]. That is
> because btrfs_scrub_dev() calls stack #0 below with
> fs_info::scrub_lock held. The test case leading to this warning:
> 
>   mkfs.btrfs -fq /dev/sdb && mount /dev/sdb /btrfs
>   btrfs scrub start -B /btrfs
> 
> In fact we have fs_info::scrub_workers_refcnt to track whether the init
> and destroy of the scrub workers are needed. So once we have
> incremented and decremented the fs_info::scrub_workers_refcnt value in
> the thread, it is OK to drop the scrub_lock and then actually do the
> btrfs_destroy_workqueue() part. So this patch drops the scrub_lock
> before calling btrfs_destroy_workqueue().
> 
> [1]
> [snip lockdep warning]
> 
> Signed-off-by: Anand Jain <anand.jain@oracle.com>
> ---
>  fs/btrfs/scrub.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index b1c2d1cdbd4b..3fc31eeef2a7 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -3757,10 +3757,13 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
>  
>  static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
>  {
> +	lockdep_assert_held(&fs_info->scrub_lock);
>  	if (--fs_info->scrub_workers_refcnt == 0) {
> +		mutex_unlock(&fs_info->scrub_lock);
>  		btrfs_destroy_workqueue(fs_info->scrub_workers);
>  		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
>  		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
> +		mutex_lock(&fs_info->scrub_lock);
>  	}

This looks ok, however:

1. lockdep_assert_held should also be added to scrub_workers_get()

2. scrub_workers_refcnt must eventually be converted to the refcount_t
type and the _refcnt suffix dropped from the name

3. Do we really need 3 workqueues for scrub? Are we really winning
anything by having that many instead of one, or by just queueing scrub
to a generic one?

>  	WARN_ON(fs_info->scrub_workers_refcnt < 0);
>  }
> 


* Re: [PATCH 1/2] btrfs: scrub: maintain the unlock order in scrub thread
  2018-11-26  9:47   ` Nikolay Borisov
@ 2018-11-27 13:37     ` Anand Jain
  0 siblings, 0 replies; 10+ messages in thread
From: Anand Jain @ 2018-11-27 13:37 UTC (permalink / raw)
  To: Nikolay Borisov, linux-btrfs



On 11/26/2018 05:47 PM, Nikolay Borisov wrote:
> 
> 
> On 26.11.18 г. 11:07 ч., Anand Jain wrote:
>> The fs_info::device_list_mutex and fs_info::scrub_lock create nested
>> locks in btrfs_scrub_dev(). The acquire hierarchy is
>> fs_info::device_list_mutex first and then fs_info::scrub_lock, so
>> unlock in the reverse order: fs_info::scrub_lock first and then
>> fs_info::device_list_mutex.
>>
>> Signed-off-by: Anand Jain <anand.jain@oracle.com>
>> ---
>>   fs/btrfs/scrub.c | 16 +++++++---------
>>   1 file changed, 7 insertions(+), 9 deletions(-)
>>
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index 902819d3cf41..b1c2d1cdbd4b 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -3865,7 +3865,6 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>>   	}
>>   	sctx->readonly = readonly;
>>   	dev->scrub_ctx = sctx;
>> -	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
>>   
>>   	/*
>>   	 * checking @scrub_pause_req here, we can avoid
>> @@ -3875,15 +3874,14 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>>   	atomic_inc(&fs_info->scrubs_running);
>>   	mutex_unlock(&fs_info->scrub_lock);
>>   
>> -	if (!is_dev_replace) {
>> -		/*
>> -		 * by holding device list mutex, we can
>> -		 * kick off writing super in log tree sync.
>> -		 */
>> -		mutex_lock(&fs_info->fs_devices->device_list_mutex);
>> +	/*
>> +	 * by holding device list mutex, we can kick off writing super in log
>> +	 * tree sync.
>> +	 */
>> +	if (!is_dev_replace)
>>   		ret = scrub_supers(sctx, dev);
>> -		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
>> -	}
>> +
>> +	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
> 
> Have you considered whether this change will have any negative impact
> due to the fact that __scrub_blocked_if_needed() can now go to sleep for
> an arbitrary time with device_list_mutex held?

  You are right. I missed that point. The device_list_mutex must not be
  held across a sleep. In fact, here we don't need the nested
  device_list_mutex and scrub_lock at all. I have come up with a new
  fix [1] below separating them.

[1]
---------------------------------
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 902819d3cf41..db895ad23eda 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3830,42 +3830,37 @@ int btrfs_scrub_dev(struct btrfs_fs_info 
*fs_info, u64 devid, u64 start,
                 return -EROFS;
         }

-       mutex_lock(&fs_info->scrub_lock);
         if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
             test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &dev->dev_state)) {
-               mutex_unlock(&fs_info->scrub_lock);
                 mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 return -EIO;
         }
+       mutex_unlock(&fs_info->fs_devices->device_list_mutex);

         btrfs_dev_replace_read_lock(&fs_info->dev_replace);
         if (dev->scrub_ctx ||
             (!is_dev_replace &&
              btrfs_dev_replace_is_ongoing(&fs_info->dev_replace))) {
                 btrfs_dev_replace_read_unlock(&fs_info->dev_replace);
-               mutex_unlock(&fs_info->scrub_lock);
-               mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 return -EINPROGRESS;
         }
         btrfs_dev_replace_read_unlock(&fs_info->dev_replace);

+       mutex_lock(&fs_info->scrub_lock);
         ret = scrub_workers_get(fs_info, is_dev_replace);
         if (ret) {
                 mutex_unlock(&fs_info->scrub_lock);
-               mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 return ret;
         }

         sctx = scrub_setup_ctx(dev, is_dev_replace);
         if (IS_ERR(sctx)) {
                 mutex_unlock(&fs_info->scrub_lock);
-               mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 scrub_workers_put(fs_info);
                 return PTR_ERR(sctx);
         }
         sctx->readonly = readonly;
         dev->scrub_ctx = sctx;
-       mutex_unlock(&fs_info->fs_devices->device_list_mutex);

         /*
          * checking @scrub_pause_req here, we can avoid
------------------------------------------------

Will send v2.

Thanks, Anand



> 
>>   
>>   	if (!ret)
>>   		ret = scrub_enumerate_chunks(sctx, dev, start, end);
>>


* Re: [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning
  2018-11-26  9:59   ` Nikolay Borisov
@ 2018-11-28  8:47     ` Anand Jain
  2018-11-29 14:31       ` David Sterba
  0 siblings, 1 reply; 10+ messages in thread
From: Anand Jain @ 2018-11-28  8:47 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs



On 11/26/2018 05:59 PM, Nikolay Borisov wrote:
> 
> 
> On 26.11.18 г. 11:07 ч., Anand Jain wrote:
>> The circular locking dependency check reports the warning [1]. That is
>> because btrfs_scrub_dev() calls stack #0 below with
>> fs_info::scrub_lock held. The test case leading to this warning:
>>
>>    mkfs.btrfs -fq /dev/sdb && mount /dev/sdb /btrfs
>>    btrfs scrub start -B /btrfs
>>
>> In fact we have fs_info::scrub_workers_refcnt to track whether the init
>> and destroy of the scrub workers are needed. So once we have
>> incremented and decremented the fs_info::scrub_workers_refcnt value in
>> the thread, it is OK to drop the scrub_lock and then actually do the
>> btrfs_destroy_workqueue() part. So this patch drops the scrub_lock
>> before calling btrfs_destroy_workqueue().
>>
>> [1]
>> [snip lockdep warning]
>>
>> Signed-off-by: Anand Jain <anand.jain@oracle.com>
>> ---
>>   fs/btrfs/scrub.c | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index b1c2d1cdbd4b..3fc31eeef2a7 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -3757,10 +3757,13 @@ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
>>   
>>   static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
>>   {
>> +	lockdep_assert_held(&fs_info->scrub_lock);
>>   	if (--fs_info->scrub_workers_refcnt == 0) {
>> +		mutex_unlock(&fs_info->scrub_lock);
>>   		btrfs_destroy_workqueue(fs_info->scrub_workers);
>>   		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
>>   		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
>> +		mutex_lock(&fs_info->scrub_lock);
>>   	}
> 
> This looks ok, however:

  Thanks for the suggestions. Items 1 and 2 (below) can go into a new patch.

> 1. lockdep_assert_held should also be added to get scrub workers

  ok. Added in v2 patch set.

> 2. scrub_workers_refcnt must eventually be converted to refcount_t type

  ok. Added in v2 patch set.

> and the _refcnt dropped from the name

  The name scrub_workers is already taken in this struct, so
  scrub_workers_refcnt is also fine IMO.

> 3. Do we really need 3 workqueues for scrub? Are we really winning
> anything by having that many instead of one or just queueing scrub to a
> generic one?

  Yes, the two additional workqueues are needed for distinct
  operations within scrub, such as repairing bad blocks during
  dev replace and raid56 parity verification.

Thanks, Anand


>>   	WARN_ON(fs_info->scrub_workers_refcnt < 0);
>>   }
>>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning
  2018-11-28  8:47     ` Anand Jain
@ 2018-11-29 14:31       ` David Sterba
  2018-11-30  1:05         ` Anand Jain
  0 siblings, 1 reply; 10+ messages in thread
From: David Sterba @ 2018-11-29 14:31 UTC (permalink / raw)
  To: Anand Jain; +Cc: Nikolay Borisov, linux-btrfs

On Wed, Nov 28, 2018 at 04:47:27PM +0800, Anand Jain wrote:
> > 2. scrub_workers_refcnt must eventually be converted to refcount_t type
> 
>   ok. Added in v2 patch set.

No such thing is in v2 and this would actually get rid of the need to
hold scrub_lock in scrub_workers_put. Which in turn can be moved out of
the locked section in btrfs_scrub_dev and the warning is gone. Problem
solved.


* Re: [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning
  2018-11-29 14:31       ` David Sterba
@ 2018-11-30  1:05         ` Anand Jain
  2018-11-30  5:28           ` Anand Jain
  0 siblings, 1 reply; 10+ messages in thread
From: Anand Jain @ 2018-11-30  1:05 UTC (permalink / raw)
  To: dsterba, Nikolay Borisov, linux-btrfs



On 11/29/2018 10:31 PM, David Sterba wrote:
> On Wed, Nov 28, 2018 at 04:47:27PM +0800, Anand Jain wrote:
>>> 2. scrub_workers_refcnt must eventually be converted to refcount_t type
>>
>>    ok. Added in v2 patch set.
> 
> No such thing is in v2 and this would actually get rid of the need to
> hold scrub_lock in scrub_workers_put. Which in turn can be moved out of
> the locked section in btrfs_scrub_dev and the warning is gone. Problem
> solved.
> 

  Right. When testing btrfs/011 it hung, and bisect pointed to the
  patch converting the int to refcount_t.
  I had difficulty getting the logs out of the test machines, so I
  had to drop the patch.
  I will send the refcount_t patch, patch 1/3, and possibly the scrub
  concurrency patch (making scrub independent of the btrfs-progs
  locks) all together.

Thanks, Anand


* Re: [PATCH 2/2] btrfs: scrub: fix circular locking dependency warning
  2018-11-30  1:05         ` Anand Jain
@ 2018-11-30  5:28           ` Anand Jain
  0 siblings, 0 replies; 10+ messages in thread
From: Anand Jain @ 2018-11-30  5:28 UTC (permalink / raw)
  To: dsterba, Nikolay Borisov, linux-btrfs



On 11/30/2018 09:05 AM, Anand Jain wrote:
> 
> 
> On 11/29/2018 10:31 PM, David Sterba wrote:
>> On Wed, Nov 28, 2018 at 04:47:27PM +0800, Anand Jain wrote:
>>>> 2. scrub_workers_refcnt must eventually be converted to refcount_t type
>>>
>>>    ok. Added in v2 patch set.
>>
>> No such thing is in v2 and this would actually get rid of the need to
>> hold scrub_lock in scrub_workers_put. 

  Well, we still need scrub_lock, because we spawn a scrub thread per
  device, and these threads can race as well. So the first scrub
  thread should make sure the scrub workers are allocated and keep
  the other threads waiting.
  As of now btrfs-progs serializes scrub per device, but btrfs-progs
  is just one tool and there can be other similar tools. We need
  hardening within the kernel.

Thanks, Anand


>> Which in turn can be moved out of
>> the locked section in btrfs_scrub_dev and the warning is gone. Problem
>> solved.



> 
>   Right. When testing btrfs/011 it hung, and bisect pointed to the
>   patch converting the int to refcount_t.
>   I had difficulty getting the logs out of the test machines, so I
>   had to drop the patch.
>   I will send the refcount_t patch, patch 1/3, and possibly the
>   scrub concurrency patch (making scrub independent of the
>   btrfs-progs locks) all together.
> 
> Thanks, Anand


