linux-nvme.lists.infradead.org archive mirror
* [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013
@ 2021-09-28 11:20 Yi Zhang
  2021-09-29 11:06 ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Yi Zhang @ 2021-09-28 11:20 UTC (permalink / raw)
  To: linux-nvme

Hello

Below warning was triggered with blktests nvme-tcp on the latest
5.15.0-rc3, please help check it.

[ 2153.718873] run blktests nvme/013 at 2021-09-28 07:14:57
[ 2154.082894] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 2154.116627] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 2154.174352] nvmet: creating controller 1 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:c83e4244a1d04d318ced2447da679f67.
[ 2154.196889] nvme nvme0: creating 40 I/O queues.
[ 2154.222389] nvme nvme0: mapped 40/0/0 default/read/poll queues.
[ 2154.275779] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
127.0.0.1:4420
[ 2155.648483] XFS (nvme0n1): Mounting V5 Filesystem
[ 2155.717630] XFS (nvme0n1): Ending clean mount
[ 2155.734834] xfs filesystem being mounted at /mnt/blktests supports
timestamps until 2038 (0x7fffffff)

[ 2155.932675] ======================================================
[ 2155.939575] WARNING: possible circular locking dependency detected
[ 2155.946467] 5.15.0-rc3 #1 Not tainted
[ 2155.950551] ------------------------------------------------------
[ 2155.957447] fio/1496 is trying to acquire lock:
[ 2155.962504] ffff93d58ffe2470 (sk_lock-AF_INET){+.+.}-{0:0}, at:
tcp_sendpage+0x23/0x80
[ 2155.971355]
               but task is already holding lock:
[ 2155.977866] ffff93d59bb7f8f0 (&xfs_dir_ilock_class/5){+.+.}-{3:3},
at: xfs_ilock+0xcf/0x290 [xfs]
[ 2155.987885]
               which lock already depends on the new lock.

[ 2155.997012]
               the existing dependency chain (in reverse order) is:
[ 2156.005362]
               -> #3 (&xfs_dir_ilock_class/5){+.+.}-{3:3}:
[ 2156.012843]        down_write_nested+0x49/0x120
[ 2156.017904]        xfs_ilock+0xcf/0x290 [xfs]
[ 2156.022850]        xfs_create+0x1e8/0x660 [xfs]
[ 2156.027989]        xfs_generic_create+0xfa/0x340 [xfs]
[ 2156.033805]        vfs_mkdir+0x12b/0x1e0
[ 2156.038194]        do_mkdirat+0x10a/0x130
[ 2156.042669]        __x64_sys_mkdir+0x48/0x70
[ 2156.047434]        do_syscall_64+0x3b/0x90
[ 2156.052013]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 2156.058238]
               -> #2 (sb_internal){.+.+}-{0:0}:
[ 2156.064654]        xfs_trans_alloc+0x1a1/0x3c0 [xfs]
[ 2156.070281]        xfs_vn_update_time+0xc6/0x260 [xfs]
[ 2156.076097]        touch_atime+0x129/0x230
[ 2156.080672]        xfs_file_mmap+0x84/0x90 [xfs]
[ 2156.085906]        mmap_region+0x497/0x6b0
[ 2156.090480]        do_mmap+0x362/0x560
[ 2156.094663]        vm_mmap_pgoff+0xb8/0x150
[ 2156.099330]        ksys_mmap_pgoff+0x198/0x1f0
[ 2156.104288]        do_syscall_64+0x3b/0x90
[ 2156.108860]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 2156.115078]
               -> #1 (&mm->mmap_lock#2){++++}-{3:3}:
[ 2156.121978]        __might_fault+0x56/0x80
[ 2156.126553]        _copy_from_user+0x20/0xa0
[ 2156.131327]        ip_setsockopt+0x123d/0x15c0
[ 2156.136287]        __sys_setsockopt+0xba/0x1a0
[ 2156.141250]        __x64_sys_setsockopt+0x1b/0x20
[ 2156.146499]        do_syscall_64+0x3b/0x90
[ 2156.151069]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 2156.157289]
               -> #0 (sk_lock-AF_INET){+.+.}-{0:0}:
[ 2156.164093]        __lock_acquire+0x11fe/0x1e00
[ 2156.169154]        lock_acquire+0xc4/0x2e0
[ 2156.173727]        lock_sock_nested+0x2e/0x80
[ 2156.178593]        tcp_sendpage+0x23/0x80
[ 2156.183068]        inet_sendpage+0x4f/0x80
[ 2156.187645]        kernel_sendpage+0x62/0xf0
[ 2156.192413]        nvme_tcp_try_send+0x128/0x810 [nvme_tcp]
[ 2156.198633]        nvme_tcp_queue_rq+0x35a/0x380 [nvme_tcp]
[ 2156.204853]        __blk_mq_try_issue_directly+0x120/0x1c0
[ 2156.210983]        blk_mq_try_issue_list_directly+0x157/0x250
[ 2156.217400]        blk_mq_sched_insert_requests+0x241/0x2a0
[ 2156.223620]        blk_mq_flush_plug_list+0x114/0x210
[ 2156.229256]        blk_finish_plug+0x21/0x30
[ 2156.234028]        _xfs_buf_ioapply+0x30a/0x3c0 [xfs]
[ 2156.239747]        __xfs_buf_submit+0x92/0x260 [xfs]
[ 2156.245360]        xfs_buf_read_map+0xe9/0x2a0 [xfs]
[ 2156.250969]        xfs_trans_read_buf_map+0x1bb/0x550 [xfs]
[ 2156.257274]        xfs_btree_read_buf_block.constprop.0+0x75/0xb0 [xfs]
[ 2156.264732]        xfs_btree_lookup_get_block+0x87/0x160 [xfs]
[ 2156.271316]        xfs_btree_lookup+0x123/0x4d0 [xfs]
[ 2156.277027]        xfs_dialloc_ag_update_inobt+0x3a/0x180 [xfs]
[ 2156.283709]        xfs_dialloc+0x43f/0x800 [xfs]
[ 2156.288938]        xfs_create+0x404/0x660 [xfs]
[ 2156.294075]        xfs_generic_create+0xfa/0x340 [xfs]
[ 2156.299891]        lookup_open.isra.0+0x537/0x670
[ 2156.305141]        path_openat+0x2aa/0xa20
[ 2156.309713]        do_filp_open+0x9f/0x130
[ 2156.314290]        do_sys_openat2+0x7a/0x140
[ 2156.319057]        __x64_sys_openat+0x45/0x70
[ 2156.323918]        do_syscall_64+0x3b/0x90
[ 2156.328491]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 2156.334712]
               other info that might help us debug this:

[ 2156.343644] Chain exists of:
                 sk_lock-AF_INET --> sb_internal --> &xfs_dir_ilock_class/5

[ 2156.355972]  Possible unsafe locking scenario:

[ 2156.362579]        CPU0                    CPU1
[ 2156.367632]        ----                    ----
[ 2156.372685]   lock(&xfs_dir_ilock_class/5);
[ 2156.377351]                                lock(sb_internal);
[ 2156.383764]                                lock(&xfs_dir_ilock_class/5);
[ 2156.391242]   lock(sk_lock-AF_INET);
[ 2156.395230]
                *** DEADLOCK ***

[ 2156.401834] 6 locks held by fio/1496:
[ 2156.405919]  #0: ffff93d584c36498 (sb_writers#13){.+.+}-{0:0}, at:
path_openat+0x9fc/0xa20
[ 2156.415147]  #1: ffff93d59bb7fb30
(&inode->i_sb->s_type->i_mutex_dir_key){++++}-{3:3}, at:
path_openat+0x296/0xa20
[ 2156.426693]  #2: ffff93d584c366b8 (sb_internal){.+.+}-{0:0}, at:
xfs_trans_alloc_icreate+0x41/0xd0 [xfs]
[ 2156.437363]  #3: ffff93d59bb7f8f0
(&xfs_dir_ilock_class/5){+.+.}-{3:3}, at: xfs_ilock+0xcf/0x290 [xfs]
[ 2156.447838]  #4: ffff93d5d75f4c60 (hctx->srcu){....}-{0:0}, at:
hctx_lock+0x51/0xd0
[ 2156.456386]  #5: ffff93d59d7bc3f0 (&queue->send_mutex){+.+.}-{3:3},
at: nvme_tcp_queue_rq+0x33e/0x380 [nvme_tcp]
[ 2156.467748]
               stack backtrace:
[ 2156.472612] CPU: 27 PID: 1496 Comm: fio Not tainted 5.15.0-rc3 #1
[ 2156.479410] Hardware name: Intel Corporation S2600CP/S2600CP, BIOS
RMLSDP.86I.R3.27.D685.1305151734 05/15/2013
[ 2156.490578] Call Trace:
[ 2156.493312]  dump_stack_lvl+0x57/0x72
[ 2156.497404]  check_noncircular+0xdf/0x100
[ 2156.501879]  ? usage_match+0x20/0x20
[ 2156.505867]  ? __bfs+0xf5/0x230
[ 2156.509373]  __lock_acquire+0x11fe/0x1e00
[ 2156.513850]  lock_acquire+0xc4/0x2e0
[ 2156.517829]  ? tcp_sendpage+0x23/0x80
[ 2156.521917]  lock_sock_nested+0x2e/0x80
[ 2156.526199]  ? tcp_sendpage+0x23/0x80
[ 2156.530285]  tcp_sendpage+0x23/0x80
[ 2156.534179]  inet_sendpage+0x4f/0x80
[ 2156.538170]  kernel_sendpage+0x62/0xf0
[ 2156.542355]  nvme_tcp_try_send+0x128/0x810 [nvme_tcp]
[ 2156.547994]  ? mutex_trylock+0x13f/0x160
[ 2156.552374]  ? nvme_tcp_queue_rq+0x33e/0x380 [nvme_tcp]
[ 2156.558210]  nvme_tcp_queue_rq+0x35a/0x380 [nvme_tcp]
[ 2156.563850]  __blk_mq_try_issue_directly+0x120/0x1c0
[ 2156.569395]  blk_mq_try_issue_list_directly+0x157/0x250
[ 2156.575230]  blk_mq_sched_insert_requests+0x241/0x2a0
[ 2156.580869]  blk_mq_flush_plug_list+0x114/0x210
[ 2156.585926]  blk_finish_plug+0x21/0x30
[ 2156.590110]  _xfs_buf_ioapply+0x30a/0x3c0 [xfs]
[ 2156.595252]  __xfs_buf_submit+0x92/0x260 [xfs]
[ 2156.600295]  xfs_buf_read_map+0xe9/0x2a0 [xfs]
[ 2156.605335]  xfs_trans_read_buf_map+0x1bb/0x550 [xfs]
[ 2156.611060]  ? xfs_btree_read_buf_block.constprop.0+0x75/0xb0 [xfs]
[ 2156.618129]  xfs_btree_read_buf_block.constprop.0+0x75/0xb0 [xfs]
[ 2156.625006]  xfs_btree_lookup_get_block+0x87/0x160 [xfs]
[ 2156.631008]  xfs_btree_lookup+0x123/0x4d0 [xfs]
[ 2156.636137]  ? rcu_read_lock_sched_held+0x3f/0x80
[ 2156.641395]  ? kmem_cache_alloc+0x28a/0x340
[ 2156.646070]  xfs_dialloc_ag_update_inobt+0x3a/0x180 [xfs]
[ 2156.652172]  xfs_dialloc+0x43f/0x800 [xfs]
[ 2156.656822]  ? lock_acquired+0xbc/0x380
[ 2156.661104]  xfs_create+0x404/0x660 [xfs]
[ 2156.665663]  xfs_generic_create+0xfa/0x340 [xfs]
[ 2156.670900]  lookup_open.isra.0+0x537/0x670
[ 2156.675573]  path_openat+0x2aa/0xa20
[ 2156.679564]  do_filp_open+0x9f/0x130
[ 2156.683554]  ? lock_release+0x28a/0x400
[ 2156.687840]  ? _raw_spin_unlock+0x1f/0x30
[ 2156.692314]  ? alloc_fd+0x12f/0x1f0
[ 2156.696210]  do_sys_openat2+0x7a/0x140
[ 2156.700394]  __x64_sys_openat+0x45/0x70
[ 2156.704676]  do_syscall_64+0x3b/0x90
[ 2156.708658]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 2156.714297] RIP: 0033:0x7fcb40bc7584
[ 2156.718290] Code: 24 20 eb 8f 66 90 44 89 54 24 0c e8 56 8b f8 ff
44 8b 54 24 0c 44 89 e2 48 89 ee 41 89 c0 bf 9c ff ff ff b8 01 01 00
00 0f 05 <48> 3d 00 f0 ff ff 77 34 44 89 c7 89 44 24 0c e8 98 8b f8 ff
8b 44
[ 2156.739247] RSP: 002b:00007ffda2d2e600 EFLAGS: 00000293 ORIG_RAX:
0000000000000101
[ 2156.747698] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fcb40bc7584
[ 2156.755660] RDX: 0000000000000041 RSI: 00007fcb3fad72b0 RDI: 00000000ffffff9c
[ 2156.763626] RBP: 00007fcb3fad72b0 R08: 0000000000000000 R09: 0000000000000000
[ 2156.771591] R10: 00000000000001a4 R11: 0000000000000293 R12: 0000000000000041
[ 2156.779553] R13: 0000000000000000 R14: 00007fcb3fad7110 R15: 000000003b600000

-- 
Best Regards,
  Yi Zhang



* Re: [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013
  2021-09-28 11:20 [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013 Yi Zhang
@ 2021-09-29 11:06 ` Sagi Grimberg
  2022-02-11  5:38   ` Yi Zhang
  2022-02-15  4:21   ` Chris Leech
  0 siblings, 2 replies; 6+ messages in thread
From: Sagi Grimberg @ 2021-09-29 11:06 UTC (permalink / raw)
  To: Yi Zhang, linux-nvme


> Hello
> 
> Below warning was triggered with blktests nvme-tcp on the latest
> 5.15.0-rc3, please help check it.

Hey Yi, thanks for reporting.

I'm trying to understand the circular locking here, specifically
whether an allocation in the network send path could create a
circular dependency.


* Re: [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013
  2021-09-29 11:06 ` Sagi Grimberg
@ 2022-02-11  5:38   ` Yi Zhang
  2022-02-15  4:21   ` Chris Leech
  1 sibling, 0 replies; 6+ messages in thread
From: Yi Zhang @ 2022-02-11  5:38 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: open list:NVM EXPRESS DRIVER

On Wed, Sep 29, 2021 at 7:06 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> > Hello
> >
> > Below warning was triggered with blktests nvme-tcp on the latest
> > 5.15.0-rc3, please help check it.
>
> Hey Yi, thanks for reporting.
>
> I'm trying to understand the circular locking here, specifically
> whether an allocation in the network send path could create a
> circular dependency.
>
> >

Hi Sagi,
This issue is still reproducible on the latest 5.17.0-rc3; feel free to
let me know if you need any testing for it. :)

[ 5960.130892] run blktests nvme/013 at 2022-02-11 00:31:51
[ 5960.456011] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 5960.483756] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 5960.511578] nvmet: creating nvm controller 1 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-4b10-8044-b9c04f463333.
[ 5960.515933] nvme nvme0: creating 32 I/O queues.
[ 5960.536234] nvme nvme0: mapped 32/0/0 default/read/poll queues.
[ 5960.551779] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
127.0.0.1:4420
[ 5961.788277] XFS (nvme0n1): Mounting V5 Filesystem
[ 5961.820057] XFS (nvme0n1): Ending clean mount
[ 5961.826643] xfs filesystem being mounted at /mnt/blktests supports
timestamps until 2038 (0x7fffffff)

[ 5966.115175] ======================================================
[ 5966.121352] WARNING: possible circular locking dependency detected
[ 5966.127533] 5.17.0-rc3 #2 Tainted: G S        I
[ 5966.132585] ------------------------------------------------------
[ 5966.138763] fio/5034 is trying to acquire lock:
[ 5966.143295] ffff8882ad9158c0 (sk_lock-AF_INET){+.+.}-{0:0}, at:
tcp_sendpage+0x23/0x50
[ 5966.151218]
               but task is already holding lock:
[ 5966.157050] ffff8881d51246b0 (&xfs_dir_ilock_class/5){+.+.}-{3:3},
at: xfs_ilock+0x166/0x4a0 [xfs]
[ 5966.166149]
               which lock already depends on the new lock.

[ 5966.174321]
               the existing dependency chain (in reverse order) is:
[ 5966.181801]
               -> #3 (&xfs_dir_ilock_class/5){+.+.}-{3:3}:
[ 5966.188509]        lock_acquire+0x1d2/0x5a0
[ 5966.192695]        down_write_nested+0xa2/0x3b0
[ 5966.197227]        xfs_ilock+0x166/0x4a0 [xfs]
[ 5966.201793]        xfs_create+0x64b/0x12e0 [xfs]
[ 5966.206524]        xfs_generic_create+0x25b/0x5a0 [xfs]
[ 5966.211863]        lookup_open+0xea8/0x1850
[ 5966.216049]        path_openat+0x705/0x2370
[ 5966.220235]        do_filp_open+0x1a4/0x280
[ 5966.224421]        do_sys_openat2+0x30d/0x670
[ 5966.228779]        do_sys_open+0x8a/0xd0
[ 5966.232706]        do_syscall_64+0x37/0x80
[ 5966.236805]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 5966.242379]
               -> #2 (sb_internal){.+.+}-{0:0}:
[ 5966.248125]        lock_acquire+0x1d2/0x5a0
[ 5966.252309]        xfs_trans_alloc+0x3b1/0x980 [xfs]
[ 5966.257388]        xfs_vn_update_time+0x201/0x610 [xfs]
[ 5966.262728]        touch_atime+0x2fa/0x490
[ 5966.266824]        xfs_file_mmap+0x244/0x2d0 [xfs]
[ 5966.271732]        mmap_region+0x87f/0x11c0
[ 5966.275917]        do_mmap+0x58c/0xc00
[ 5966.279670]        vm_mmap_pgoff+0x178/0x220
[ 5966.283942]        ksys_mmap_pgoff+0x2f3/0x470
[ 5966.288387]        do_syscall_64+0x37/0x80
[ 5966.292488]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 5966.298060]
               -> #1 (&mm->mmap_lock#2){++++}-{3:3}:
[ 5966.304246]        lock_acquire+0x1d2/0x5a0
[ 5966.308433]        __might_fault+0xe2/0x160
[ 5966.312617]        _copy_from_user+0x20/0xd0
[ 5966.316893]        copy_bpf_fprog_from_user+0x16f/0x1f0
[ 5966.322118]        sock_setsockopt+0x663/0x25b0
[ 5966.326649]        __sys_setsockopt+0x39e/0x4e0
[ 5966.331183]        __x64_sys_setsockopt+0xba/0x150
[ 5966.335974]        do_syscall_64+0x37/0x80
[ 5966.340074]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 5966.345647]
               -> #0 (sk_lock-AF_INET){+.+.}-{0:0}:
[ 5966.351738]        check_prevs_add+0x3fd/0x2480
[ 5966.356271]        __lock_acquire+0x2402/0x2fa0
[ 5966.360803]        lock_acquire+0x1d2/0x5a0
[ 5966.364990]        lock_sock_nested+0x37/0xd0
[ 5966.369350]        tcp_sendpage+0x23/0x50
[ 5966.373362]        inet_sendpage+0xb3/0x130
[ 5966.377547]        kernel_sendpage+0x1c0/0x2f0
[ 5966.381992]        nvme_tcp_try_send+0x552/0x2250 [nvme_tcp]
[ 5966.387651]        nvme_tcp_queue_rq+0xf3b/0x17b0 [nvme_tcp]
[ 5966.393312]        __blk_mq_try_issue_directly+0x448/0x660
[ 5966.398797]        blk_mq_plug_issue_direct.constprop.75+0x13a/0x210
[ 5966.405149]        blk_mq_flush_plug_list+0x828/0xfd0
[ 5966.410201]        blk_flush_plug+0x2b8/0x4c0
[ 5966.414561]        blk_finish_plug+0x47/0x90
[ 5966.418834]        _xfs_buf_ioapply+0x6ed/0xb70 [xfs]
[ 5966.423998]        __xfs_buf_submit+0x228/0x650 [xfs]
[ 5966.429155]        xfs_buf_read_map+0x365/0x970 [xfs]
[ 5966.434320]        xfs_trans_read_buf_map+0x694/0x11b0 [xfs]
[ 5966.440101]        xfs_btree_read_buf_block.constprop.28+0x1dc/0x340 [xfs]
[ 5966.447076]        xfs_btree_lookup_get_block+0x1a6/0x630 [xfs]
[ 5966.453092]        xfs_btree_lookup+0x39f/0xf20 [xfs]
[ 5966.458240]        xfs_dialloc_ag_update_inobt+0x19d/0x4e0 [xfs]
[ 5966.464350]        xfs_dialloc+0x8a7/0x11c0 [xfs]
[ 5966.469161]        xfs_create+0xb85/0x12e0 [xfs]
[ 5966.473891]        xfs_generic_create+0x25b/0x5a0 [xfs]
[ 5966.479228]        lookup_open+0xea8/0x1850
[ 5966.483416]        path_openat+0x705/0x2370
[ 5966.487601]        do_filp_open+0x1a4/0x280
[ 5966.491787]        do_sys_openat2+0x30d/0x670
[ 5966.496145]        do_sys_open+0x8a/0xd0
[ 5966.500072]        do_syscall_64+0x37/0x80
[ 5966.504172]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 5966.509742]
               other info that might help us debug this:

[ 5966.517744] Chain exists of:
                 sk_lock-AF_INET --> sb_internal --> &xfs_dir_ilock_class/5

[ 5966.528784]  Possible unsafe locking scenario:

[ 5966.534703]        CPU0                    CPU1
[ 5966.539235]        ----                    ----
[ 5966.543768]   lock(&xfs_dir_ilock_class/5);
[ 5966.547954]                                lock(sb_internal);
[ 5966.553701]                                lock(&xfs_dir_ilock_class/5);
[ 5966.560397]   lock(sk_lock-AF_INET);
[ 5966.563978]
                *** DEADLOCK ***

[ 5966.569897] 6 locks held by fio/5034:
[ 5966.573562]  #0: ffff88829c044470 (sb_writers#14){.+.+}-{0:0}, at:
path_openat+0x2182/0x2370
[ 5966.582004]  #1: ffff8881d51248f0
(&inode->i_sb->s_type->i_mutex_dir_key){++++}-{3:3}, at:
path_openat+0x6f2/0x2370
[ 5966.592439]  #2: ffff88829c044690 (sb_internal){.+.+}-{0:0}, at:
xfs_trans_alloc_icreate+0xa4/0x180 [xfs]
[ 5966.602118]  #3: ffff8881d51246b0
(&xfs_dir_ilock_class/5){+.+.}-{3:3}, at: xfs_ilock+0x166/0x4a0 [xfs]
[ 5966.611626]  #4: ffff88829c0154d0 (q->srcu){....}-{0:0}, at:
blk_mq_flush_plug_list+0x7d2/0xfd0
[ 5966.620327]  #5: ffff888e2e7dbcd0 (&queue->send_mutex){+.+.}-{3:3},
at: nvme_tcp_queue_rq+0xef5/0x17b0 [nvme_tcp]
[ 5966.630588]
               stack backtrace:
[ 5966.634948] CPU: 24 PID: 5034 Comm: fio Tainted: G S        I
5.17.0-rc3 #2
[ 5966.642425] Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS
2.11.2 004/21/2021
[ 5966.650078] Call Trace:
[ 5966.652532]  <TASK>
[ 5966.654637]  dump_stack_lvl+0x44/0x57
[ 5966.658303]  check_noncircular+0x280/0x320
[ 5966.662403]  ? print_circular_bug.isra.47+0x430/0x430
[ 5966.667455]  ? mark_lock.part.52+0x107/0x1210
[ 5966.671814]  ? mark_lock.part.52+0x107/0x1210
[ 5966.676175]  ? lock_chain_count+0x20/0x20
[ 5966.680186]  check_prevs_add+0x3fd/0x2480
[ 5966.684202]  ? check_irq_usage+0xb50/0xb50
[ 5966.688296]  ? lockdep_lock+0xcb/0x1c0
[ 5966.692051]  ? static_obj+0xc0/0xc0
[ 5966.695542]  ? sched_clock_cpu+0x15/0x200
[ 5966.699555]  __lock_acquire+0x2402/0x2fa0
[ 5966.703570]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 5966.708012]  lock_acquire+0x1d2/0x5a0
[ 5966.711679]  ? tcp_sendpage+0x23/0x50
[ 5966.715347]  ? rcu_read_unlock+0x50/0x50
[ 5966.719272]  ? sched_clock_cpu+0x15/0x200
[ 5966.723284]  lock_sock_nested+0x37/0xd0
[ 5966.727121]  ? tcp_sendpage+0x23/0x50
[ 5966.730789]  tcp_sendpage+0x23/0x50
[ 5966.734281]  inet_sendpage+0xb3/0x130
[ 5966.737946]  kernel_sendpage+0x1c0/0x2f0
[ 5966.741873]  nvme_tcp_try_send+0x552/0x2250 [nvme_tcp]
[ 5966.747012]  ? lock_is_held_type+0xd9/0x130
[ 5966.751199]  ? nvme_tcp_try_send_ddgst+0x330/0x330 [nvme_tcp]
[ 5966.756944]  ? ww_mutex_unlock+0x2f0/0x2f0
[ 5966.761044]  ? __rq_qos_issue+0x4f/0x90
[ 5966.764885]  nvme_tcp_queue_rq+0xf3b/0x17b0 [nvme_tcp]
[ 5966.770021]  ? __lock_acquire+0xc31/0x2fa0
[ 5966.774120]  __blk_mq_try_issue_directly+0x448/0x660
[ 5966.779086]  ? __blk_mq_get_driver_tag+0xa70/0xa70
[ 5966.783877]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 5966.788325]  blk_mq_plug_issue_direct.constprop.75+0x13a/0x210
[ 5966.794158]  ? blk_mq_requeue_work+0x6c0/0x6c0
[ 5966.798601]  ? lock_is_held_type+0xd9/0x130
[ 5966.802789]  blk_mq_flush_plug_list+0x828/0xfd0
[ 5966.807321]  ? blk_mq_flush_plug_list+0x7d2/0xfd0
[ 5966.812028]  ? submit_bio_noacct+0x31d/0x840
[ 5966.816299]  ? blk_mq_insert_requests+0x590/0x590
[ 5966.821003]  ? find_held_lock+0x3a/0x1c0
[ 5966.824932]  blk_flush_plug+0x2b8/0x4c0
[ 5966.828771]  ? blk_start_plug_nr_ios+0x270/0x270
[ 5966.833390]  ? submit_bio_noacct+0x840/0x840
[ 5966.837663]  blk_finish_plug+0x47/0x90
[ 5966.841413]  _xfs_buf_ioapply+0x6ed/0xb70 [xfs]
[ 5966.846064]  ? __xfs_buf_submit+0x134/0x650 [xfs]
[ 5966.850878]  ? xfs_buf_get_map+0xbc0/0xbc0 [xfs]
[ 5966.855602]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 5966.860049]  __xfs_buf_submit+0x228/0x650 [xfs]
[ 5966.864693]  xfs_buf_read_map+0x365/0x970 [xfs]
[ 5966.869338]  ? xfs_btree_read_buf_block.constprop.28+0x1dc/0x340 [xfs]
[ 5966.875957]  ? find_held_lock+0x3a/0x1c0
[ 5966.879885]  ? _xfs_buf_read+0x110/0x110 [xfs]
[ 5966.884443]  ? lock_release+0x42f/0xc90
[ 5966.888285]  xfs_trans_read_buf_map+0x694/0x11b0 [xfs]
[ 5966.893542]  ? xfs_btree_read_buf_block.constprop.28+0x1dc/0x340 [xfs]
[ 5966.900163]  ? xfs_trans_getsb+0x690/0x690 [xfs]
[ 5966.904904]  ? __kasan_slab_alloc+0x6d/0x90
[ 5966.909091]  ? xfs_btree_ptr_to_daddr+0x24f/0x6b0 [xfs]
[ 5966.914422]  xfs_btree_read_buf_block.constprop.28+0x1dc/0x340 [xfs]
[ 5966.920867]  ? __kernel_text_address+0xe/0x30
[ 5966.925228]  ? xfs_btree_readahead_ptr.constprop.27+0x1f0/0x1f0 [xfs]
[ 5966.931765]  ? xfs_btree_ptr_to_daddr+0x24f/0x6b0 [xfs]
[ 5966.937083]  xfs_btree_lookup_get_block+0x1a6/0x630 [xfs]
[ 5966.942586]  ? filter_irq_stacks+0xa0/0xa0
[ 5966.946682]  ? xfs_btree_get_leaf_keys+0xdf/0x480 [xfs]
[ 5966.952004]  ? xfs_btree_dec_cursor+0xe0/0xe0 [xfs]
[ 5966.956979]  ? kasan_save_stack+0x2b/0x40
[ 5966.960992]  ? kasan_save_stack+0x1c/0x40
[ 5966.965005]  ? __kasan_slab_alloc+0x6d/0x90
[ 5966.969189]  ? kmem_cache_alloc+0x105/0x2d0
[ 5966.973377]  ? xfs_inobt_init_common+0x6e/0x300 [xfs]
[ 5966.978531]  ? xfs_inobt_init_cursor+0x4f/0x170 [xfs]
[ 5966.983688]  ? xfs_dialloc+0x893/0x11c0 [xfs]
[ 5966.988144]  ? xfs_create+0xb85/0x12e0 [xfs]
[ 5966.992529]  ? xfs_generic_create+0x25b/0x5a0 [xfs]
[ 5966.997524]  xfs_btree_lookup+0x39f/0xf20 [xfs]
[ 5967.002162]  ? xfs_btree_overlapped_query_range+0xfd0/0xfd0 [xfs]
[ 5967.008355]  ? find_held_lock+0x3a/0x1c0
[ 5967.012281]  ? lock_downgrade+0x6b0/0x6b0
[ 5967.016294]  ? lock_is_held_type+0xd9/0x130
[ 5967.020480]  ? xfs_inobt_init_common+0x6e/0x300 [xfs]
[ 5967.025634]  ? rcu_read_lock_sched_held+0xaf/0xe0
[ 5967.030340]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 5967.034788]  xfs_dialloc_ag_update_inobt+0x19d/0x4e0 [xfs]
[ 5967.040378]  ? xfs_dialloc_ag_finobt_newino.isra.19+0x550/0x550 [xfs]
[ 5967.046919]  ? xfs_inobt_init_common+0x203/0x300 [xfs]
[ 5967.052165]  xfs_dialloc+0x8a7/0x11c0 [xfs]
[ 5967.056458]  ? xfs_ialloc_pagi_init+0xc0/0xc0 [xfs]
[ 5967.061437]  ? down_write_nested+0x185/0x3b0
[ 5967.065708]  ? _down_write_nest_lock+0x3b0/0x3b0
[ 5967.070328]  ? rcu_read_lock_bh_held+0xc0/0xc0
[ 5967.074771]  ? xfs_trans_alloc_inode+0x2c0/0x2c0 [xfs]
[ 5967.080026]  ? xfs_create+0x64b/0x12e0 [xfs]
[ 5967.084420]  xfs_create+0xb85/0x12e0 [xfs]
[ 5967.088634]  ? xfs_init_new_inode+0x18a0/0x18a0 [xfs]
[ 5967.093806]  ? get_cached_acl+0x185/0x390
[ 5967.097817]  ? rcu_read_lock_held+0xaf/0xc0
[ 5967.102003]  ? rcu_read_lock_sched_held+0xe0/0xe0
[ 5967.106709]  ? _raw_spin_unlock_irqrestore+0x30/0x50
[ 5967.111675]  ? get_cached_acl+0x185/0x390
[ 5967.115686]  ? get_cached_acl+0x272/0x390
[ 5967.119699]  ? posix_acl_fix_xattr_userns+0x290/0x290
[ 5967.124756]  ? get_acl+0x18/0x260
[ 5967.128082]  xfs_generic_create+0x25b/0x5a0 [xfs]
[ 5967.132909]  ? xfs_setup_iops+0x3b0/0x3b0 [xfs]
[ 5967.137553]  ? may_create+0x2ef/0x370
[ 5967.141217]  ? selinux_inode_init_security+0x730/0x730
[ 5967.146359]  ? selinux_inode_permission+0x288/0x3b0
[ 5967.151238]  ? selinux_sk_alloc_security+0x1d0/0x1d0
[ 5967.156202]  ? from_kgid+0x83/0xc0
[ 5967.159610]  lookup_open+0xea8/0x1850
[ 5967.163277]  ? lock_acquired+0x262/0xb10
[ 5967.167198]  ? lookup_positive_unlocked+0x70/0x70
[ 5967.171910]  ? __mnt_want_write+0x15b/0x240
[ 5967.176101]  path_openat+0x705/0x2370
[ 5967.179766]  ? do_syscall_64+0x27/0x80
[ 5967.183521]  ? filename_lookup.part.60+0x390/0x390
[ 5967.188312]  ? lock_is_held_type+0xd9/0x130
[ 5967.192497]  ? alloc_fd+0x234/0x570
[ 5967.195991]  ? sched_clock_cpu+0x15/0x200
[ 5967.200002]  do_filp_open+0x1a4/0x280
[ 5967.203668]  ? lock_release+0x42f/0xc90
[ 5967.207508]  ? may_open_dev+0xc0/0xc0
[ 5967.211177]  ? do_raw_spin_unlock+0x54/0x230
[ 5967.215459]  do_sys_openat2+0x30d/0x670
[ 5967.219302]  ? file_open_root+0x200/0x200
[ 5967.223316]  ? ktime_get_coarse_real_ts64+0x11c/0x160
[ 5967.228367]  do_sys_open+0x8a/0xd0
[ 5967.231772]  ? filp_open+0x50/0x50
[ 5967.235178]  ? syscall_trace_enter.isra.15+0x190/0x270
[ 5967.240319]  do_syscall_64+0x37/0x80
[ 5967.243896]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 5967.248948] RIP: 0033:0x7ff075c3f296
[ 5967.252527] Code: 89 54 24 08 e8 9b f4 ff ff 8b 74 24 0c 48 8b 3c
24 41 89 c0 44 8b 54 24 08 b8 01 01 00 00 89 f2 48 89 fe bf 9c ff ff
ff 0f 05 <48> 3d 00 f0 ff ff 77 30 44 89 c7 89 44 24 08 e8 c6 f4 ff ff
8b 44
[ 5967.271273] RSP: 002b:00007ffc916b0af0 EFLAGS: 00000293 ORIG_RAX:
0000000000000101
[ 5967.278839] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff075c3f296
[ 5967.285971] RDX: 0000000000000041 RSI: 00007ff0640ff290 RDI: 00000000ffffff9c
[ 5967.293103] RBP: 0000000000000000 R08: 0000000000000000 R09: 00007ff0754627e0
[ 5967.300235] R10: 00000000000001a4 R11: 0000000000000293 R12: 000000003b600000
[ 5967.307370] R13: 0000000000000000 R14: 00007ff0640ff110 R15: 00007ff00b494000
[ 5967.314507]  </TASK>
[ 6105.703881] XFS (nvme0n1): Unmounting Filesystem
[ 6105.765404] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
-- 
Best Regards,
  Yi Zhang




* Re: [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013
  2021-09-29 11:06 ` Sagi Grimberg
  2022-02-11  5:38   ` Yi Zhang
@ 2022-02-15  4:21   ` Chris Leech
  2022-02-15 15:18     ` Chris Leech
  1 sibling, 1 reply; 6+ messages in thread
From: Chris Leech @ 2022-02-15  4:21 UTC (permalink / raw)
  To: Sagi Grimberg, Yi Zhang, linux-nvme

On 9/29/21 4:06 AM, Sagi Grimberg wrote:
>> Hello
>>
>> Below warning was triggered with blktests nvme-tcp on the latest
>> 5.15.0-rc3, please help check it.
> 
> Hey Yi, thanks for reporting.
> 
> I'm trying to understand the circular locking here, specifically
> whether an allocation in the network send path could create a
> circular dependency.

Hi.  I've spent a bit of time trying to make sense of this lockdep
complaint, and I have a few observations and a few experiments.

First, the stack trace where lockdep complains is a direct send
from nvme_tcp_queue_request, and it holds an impressive number of locks,
as you'd expect from a single call path that goes from a VFS syscall,
through XFS, the blk-mq layers, and the nvme driver, and finally into
the TCP code for a sendpage call.

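For reference, the direct-send fast path in question looks roughly like
this (abridged from memory of nvme_tcp_queue_request() around v5.15; the
Experiment #1 diff below has the exact condition, so treat this as a
sketch rather than a verbatim quote):

	if (queue->io_cpu == raw_smp_processor_id() &&
	    sync && empty && mutex_trylock(&queue->send_mutex)) {
		/* send directly from the submitting context; this is what
		 * ends up in kernel_sendpage() -> lock_sock() while the
		 * caller still holds its filesystem and blk-mq locks */
		queue->more_requests = !last;
		nvme_tcp_send_all(queue);
		queue->more_requests = false;
		mutex_unlock(&queue->send_mutex);
	} else if (last) {
		/* otherwise defer the send to the per-queue io_work */
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	}
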
Second, the hypothetical conflicting lock chain involves a setsockopt
syscall which could cause a page fault during a copy_from_user while
holding the sk_lock.  This seems, to me, to be the key to how lockdep
thinks the sk_lock could be held before the various xfs locks.

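The pattern lockdep recorded for that link boils down to something like
this (a made-up handler just to show the ordering, not the actual
ip_setsockopt code):

	static int setsockopt_example(struct sock *sk, void __user *optval)
	{
		int val;

		lock_sock(sk);		/* takes sk_lock-AF_INET */
		if (copy_from_user(&val, optval, sizeof(val))) {
			/* the copy may fault and take mm->mmap_lock
			 * while sk_lock is still held */
			release_sock(sk);
			return -EFAULT;
		}
		/* ... apply the option ... */
		release_sock(sk);
		return 0;
	}

So lockdep has seen sk_lock taken before mmap_lock there, mmap_lock
before sb_internal via the mmap/touch_atime path in #2, sb_internal
before the xfs inode lock in #3, and our stack now tries to take
sk_lock after the xfs locks, which closes the cycle.
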
If that could happen, I suppose that holding a sk_lock waiting for a
page fault to be handled, while that same socket was the transport
path to the storage that needed to be accessed, would be a very bad
thing indeed.  But the socket used by nvme-tcp is never exposed to
userspace, so this looks like a false positive.  And lockdep isn't
complaining about the lock recursing, but about a circular dependency with
the xfs locks.

Experiment #1: disable the direct-send mode so that sends always happen
from workqueue context.  This separates the sk_lock held by
kernel_sendpage from the filesystem locks.  A quick hack, but it
confirms that this breaks the lockdep cycle.

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -294,7 +294,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
 	 * directly, otherwise queue io_work. Also, only do that if we
 	 * are on the same cpu, so we don't introduce contention.
 	 */
-	if (queue->io_cpu == raw_smp_processor_id() &&
+	if (false && queue->io_cpu == raw_smp_processor_id() &&
 	    sync && empty && mutex_trylock(&queue->send_mutex)) {
 		queue->more_requests = !last;
 		nvme_tcp_send_all(queue);


Experiment #2: create a dedicated lock_class_key for the socket used
by nvme-tcp.  This separates lockdep analysis from general AF_INET
socket use.  Because the socket used by nvme-tcp is only ever used
through the kernel interfaces, there's never a copy_from_user with the
sk_lock held.  The ceph networking code does the same thing for
similar reasons.

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -14,12 +14,17 @@
 #include <linux/blk-mq.h>
 #include <crypto/hash.h>
 #include <net/busy_poll.h>
+#include <linux/lockdep.h>

 #include "nvme.h"
 #include "fabrics.h"

 struct nvme_tcp_queue;

+#ifdef CONFIG_LOCKDEP
+static struct lock_class_key nvme_tcp_socket_class;
+#endif
+
 /* Define the socket priority to use for connections were it is desirable
  * that the NIC consider performing optimized packet processing or filtering.
  * A non-zero value being sufficient to indicate general consideration of any
@@ -1427,6 +1432,10 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
 		goto err_destroy_mutex;
 	}

+#ifdef CONFIG_LOCKDEP
+	lockdep_set_class(&queue->sock->sk->sk_lock, &nvme_tcp_socket_class);
+#endif
+
 	/* Single syn retry */
 	tcp_sock_set_syncnt(queue->sock->sk, 1);

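If I remember right, the ceph messenger does essentially this when it
creates its kernel socket (quoting from memory, so treat the exact file
and names as an assumption rather than a reference):

	static struct lock_class_key socket_class;
	...
	/* in ceph_tcp_connect(), net/ceph/messenger.c */
	lockdep_set_class(&sock->sk->sk_lock, &socket_class);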

I'm not sure why I can't reproduce this with ext4 as the filesystem.

Also, I might have a "sleeping function called in atomic context"
report with a similar enough call trace that it's worth taking a
closer look at the direct send path.

- Chris Leech



* Re: [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013
  2022-02-15  4:21   ` Chris Leech
@ 2022-02-15 15:18     ` Chris Leech
  2022-02-20 12:56       ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Chris Leech @ 2022-02-15 15:18 UTC (permalink / raw)
  To: Sagi Grimberg, Yi Zhang, linux-nvme

On 2/14/22 8:21 PM, Chris Leech wrote:
> Also, I might have a "sleeping function called in atomic context"
> report with a similar enough call trace that it's worth taking a
> closer look at the direct send path.

That looks to be a backport run with device mapper multipath, and a
repeat of the dm-rq BLK_MQ_F_BLOCKING discussion on linux-block.

I currently don't see a reason why the lockdep annotation wouldn't be
enough for this.  I'll send a real patch if that's the way we want to go.

- Chris




* Re: [bug report] WARNING: possible circular locking dependency detected with blktests nvme-tcp nvme/013
  2022-02-15 15:18     ` Chris Leech
@ 2022-02-20 12:56       ` Sagi Grimberg
  0 siblings, 0 replies; 6+ messages in thread
From: Sagi Grimberg @ 2022-02-20 12:56 UTC (permalink / raw)
  To: Chris Leech, Yi Zhang, linux-nvme


>> Also, I might have a "sleeping function called in atomic context"
>> report with a similar enough call trace that it's worth taking a
>> closer look at the direct send path.
> 
> That looks to be a backport run with device mapper multipath, and a
> repeat of the dm-rq BLK_MQ_F_BLOCKING discussion on linux-block.
> 
> I currently don't see a reason why the lockdep annotation wouldn't be
> enough for this.  I'll send a real patch if that's the way we want to go.

Hey Chris,

Are you sending a patch for this? You can add my review tag when you do.


