* [PATCH v3 0/2] resolve controller delete hang due to ongoing mpath I/O
From: Sagi Grimberg @ 2020-07-22 23:32 UTC (permalink / raw)
To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman, James Smart
Changes from v2:
- remove nvme_remove_namespaces split (Christoph)
- tweaked nvme_path_is_disabled to look better (Christoph)
Changes from v1:
- Rename states to NVME_CTRL_DELETING and NVME_CTRL_DELETING_NOIO to better
describe the states
- Added prep patch to split nvme_remove_namespaces to _prep_ and _do_
- Added prep patch to provide some documentation about the states
--------------------------------------------------------------------------
A deadlock happens in the following scenario with multipath:
1) scan_work(nvme0) detects a new nsid while nvme0
is an optimized path to it; path nvme1 happens to be
inaccessible.
2) Before scan_work completes, an nvme0 disconnect is initiated
and nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.
3) The scan_work from step 1) attempts to submit I/O,
but nvme_path_is_optimized() observes that nvme0 is not LIVE.
Since nvme1 is still a possible path, the I/O is requeued and scan_work hangs.
4) The delete then also hangs in flush_work(ctrl->scan_work)
from nvme_remove_namespaces().
Similarly, a deadlock with ana_work may happen: if ana_work has started
and calls nvme_mpath_set_live and device_add_disk, it will
trigger I/O. When we then initiate disconnect, that I/O blocks because
our accessible (optimized) path is disconnecting and the alternate
path is inaccessible. Disconnect then tries to flush
ana_work and hangs.
This patchset alters the nvme states to address this deadlock condition.
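[Editor's note] As a rough illustration of the fix (a standalone userspace model with abbreviated names, not the kernel code; the real change is in nvme_path_is_disabled() in patch 2/2), the path-selection predicate changes roughly like this:
--
/* Userspace model of the mpath path-selection change in this series.
 * Illustrative only: the kernel also checks per-namespace flags such
 * as NVME_NS_ANA_PENDING, omitted here.
 */
#include <stdbool.h>
#include <stdio.h>

enum ctrl_state { LIVE, DELETING, DELETING_NOIO, DEAD };

/* Before: any non-LIVE controller is a disabled path, so with the
 * alternate path inaccessible the scan I/O is requeued forever. */
static bool path_disabled_old(enum ctrl_state s)
{
	return s != LIVE;
}

/* After: DELETING still serves mpath I/O; only DELETING_NOIO and
 * later states disable the path. */
static bool path_disabled_new(enum ctrl_state s)
{
	return s != LIVE && s != DELETING;
}

int main(void)
{
	enum ctrl_state nvme0 = DELETING;	/* disconnect in progress */

	printf("old: scan I/O %s\n",
	       path_disabled_old(nvme0) ? "requeued (hangs)" : "submitted");
	printf("new: scan I/O %s\n",
	       path_disabled_new(nvme0) ? "requeued (hangs)" : "submitted");
	return 0;
}
--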
Sagi Grimberg (2):
nvme: document nvme controller states
nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
drivers/nvme/host/core.c | 15 +++++++++++++++
drivers/nvme/host/fc.c | 1 +
drivers/nvme/host/multipath.c | 18 +++++++++++++++---
drivers/nvme/host/nvme.h | 20 ++++++++++++++++++++
drivers/nvme/host/rdma.c | 10 ++++++----
drivers/nvme/host/tcp.c | 15 +++++++++------
6 files changed, 66 insertions(+), 13 deletions(-)
--
2.25.1
* [PATCH v3 1/2] nvme: document nvme controller states
From: Sagi Grimberg @ 2020-07-22 23:32 UTC (permalink / raw)
To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman, James Smart
We are starting to see some non-trivial states,
so let's start documenting them.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/host/nvme.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 2ee91a3dd8e0..92629758b77c 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -181,6 +181,20 @@ static inline u16 nvme_req_qid(struct request *req)
*/
#define NVME_QUIRK_DELAY_AMOUNT 2300
+/*
+ * enum nvme_ctrl_state: Controller state
+ *
+ * @NVME_CTRL_NEW: New controller just allocated, initial state
+ * @NVME_CTRL_LIVE: Controller is connected and I/O capable
+ * @NVME_CTRL_RESETTING: Controller is resetting (or scheduled reset)
+ * @NVME_CTRL_CONNECTING: Controller is disconnected, now connecting the
+ * transport
+ * @NVME_CTRL_DELETING: Controller is deleting (or scheduled deletion)
+ * @NVME_CTRL_DEAD: Controller is non-present/unresponsive during
+ * shutdown or removal. In this case we forcibly
+ * kill all inflight I/O as they have no chance to
+ * complete
+ */
enum nvme_ctrl_state {
NVME_CTRL_NEW,
NVME_CTRL_LIVE,
--
2.25.1
* [PATCH v3 2/2] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
From: Sagi Grimberg @ 2020-07-22 23:32 UTC (permalink / raw)
To: linux-nvme, Christoph Hellwig, Keith Busch; +Cc: Anton Eidelman, James Smart
A deadlock happens in the following scenario with multipath:
1) scan_work(nvme0) detects a new nsid while nvme0
is an optimized path to it; path nvme1 happens to be
inaccessible.
2) Before scan_work completes, an nvme0 disconnect is initiated
and nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.
3) The scan_work from step 1) attempts to submit I/O,
but nvme_path_is_optimized() observes that nvme0 is not LIVE.
Since nvme1 is still a possible path, the I/O is requeued and scan_work hangs.
--
Workqueue: nvme-wq nvme_scan_work [nvme_core]
kernel: Call Trace:
kernel: __schedule+0x2b9/0x6c0
kernel: schedule+0x42/0xb0
kernel: io_schedule+0x16/0x40
kernel: do_read_cache_page+0x438/0x830
kernel: read_cache_page+0x12/0x20
kernel: read_dev_sector+0x27/0xc0
kernel: read_lba+0xc1/0x220
kernel: efi_partition+0x1e6/0x708
kernel: check_partition+0x154/0x244
kernel: rescan_partitions+0xae/0x280
kernel: __blkdev_get+0x40f/0x560
kernel: blkdev_get+0x3d/0x140
kernel: __device_add_disk+0x388/0x480
kernel: device_add_disk+0x13/0x20
kernel: nvme_mpath_set_live+0x119/0x140 [nvme_core]
kernel: nvme_update_ns_ana_state+0x5c/0x60 [nvme_core]
kernel: nvme_set_ns_ana_state+0x1e/0x30 [nvme_core]
kernel: nvme_parse_ana_log+0xa1/0x180 [nvme_core]
kernel: nvme_mpath_add_disk+0x47/0x90 [nvme_core]
kernel: nvme_validate_ns+0x396/0x940 [nvme_core]
kernel: nvme_scan_work+0x24f/0x380 [nvme_core]
kernel: process_one_work+0x1db/0x380
kernel: worker_thread+0x249/0x400
kernel: kthread+0x104/0x140
--
4) The delete then also hangs in flush_work(ctrl->scan_work)
from nvme_remove_namespaces().
Similarly, a deadlock with ana_work may happen: if ana_work has started
and calls nvme_mpath_set_live and device_add_disk, it will
trigger I/O. When we then initiate disconnect, that I/O blocks because
our accessible (optimized) path is disconnecting and the alternate
path is inaccessible. Disconnect then tries to flush
ana_work and hangs.
[ 605.550896] Workqueue: nvme-wq nvme_ana_work [nvme_core]
[ 605.552087] Call Trace:
[ 605.552683] __schedule+0x2b9/0x6c0
[ 605.553507] schedule+0x42/0xb0
[ 605.554201] io_schedule+0x16/0x40
[ 605.555012] do_read_cache_page+0x438/0x830
[ 605.556925] read_cache_page+0x12/0x20
[ 605.557757] read_dev_sector+0x27/0xc0
[ 605.558587] amiga_partition+0x4d/0x4c5
[ 605.561278] check_partition+0x154/0x244
[ 605.562138] rescan_partitions+0xae/0x280
[ 605.563076] __blkdev_get+0x40f/0x560
[ 605.563830] blkdev_get+0x3d/0x140
[ 605.564500] __device_add_disk+0x388/0x480
[ 605.565316] device_add_disk+0x13/0x20
[ 605.566070] nvme_mpath_set_live+0x5e/0x130 [nvme_core]
[ 605.567114] nvme_update_ns_ana_state+0x2c/0x30 [nvme_core]
[ 605.568197] nvme_update_ana_state+0xca/0xe0 [nvme_core]
[ 605.569360] nvme_parse_ana_log+0xa1/0x180 [nvme_core]
[ 605.571385] nvme_read_ana_log+0x76/0x100 [nvme_core]
[ 605.572376] nvme_ana_work+0x15/0x20 [nvme_core]
[ 605.573330] process_one_work+0x1db/0x380
[ 605.574144] worker_thread+0x4d/0x400
[ 605.574896] kthread+0x104/0x140
[ 605.577205] ret_from_fork+0x35/0x40
[ 605.577955] INFO: task nvme:14044 blocked for more than 120 seconds.
[ 605.579239] Tainted: G OE 5.3.5-050305-generic #201910071830
[ 605.580712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 605.582320] nvme D 0 14044 14043 0x00000000
[ 605.583424] Call Trace:
[ 605.583935] __schedule+0x2b9/0x6c0
[ 605.584625] schedule+0x42/0xb0
[ 605.585290] schedule_timeout+0x203/0x2f0
[ 605.588493] wait_for_completion+0xb1/0x120
[ 605.590066] __flush_work+0x123/0x1d0
[ 605.591758] __cancel_work_timer+0x10e/0x190
[ 605.593542] cancel_work_sync+0x10/0x20
[ 605.594347] nvme_mpath_stop+0x2f/0x40 [nvme_core]
[ 605.595328] nvme_stop_ctrl+0x12/0x50 [nvme_core]
[ 605.596262] nvme_do_delete_ctrl+0x3f/0x90 [nvme_core]
[ 605.597333] nvme_sysfs_delete+0x5c/0x70 [nvme_core]
[ 605.598320] dev_attr_store+0x17/0x30
Fix this by introducing a new state: NVME_CTRL_DELETING_NOIO, which
indicates the phase of controller deletion where I/O can no longer be
allowed to access the namespace. NVME_CTRL_DELETING still allows mpath
I/O to be issued to the bottom device, and only after we flush ana_work
and scan_work (i.e. after nvme_stop_ctrl, at the start of namespace
removal in nvme_remove_namespaces) do we change the state to
NVME_CTRL_DELETING_NOIO. We also prevent ana_work from re-firing by
aborting early if we are not LIVE, so we should be safe here.
In addition, change the transport drivers to follow the updated state
machine.
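[Editor's note] To illustrate where the new state sits, here is a standalone userspace sketch (illustrative only; abbreviated names, not the kernel's transition table, which lives in the nvme_change_ctrl_state() hunk below):
--
#include <stdbool.h>
#include <stdio.h>

enum ctrl_state { NEW, LIVE, RESETTING, CONNECTING, DELETING,
		  DELETING_NOIO, DEAD };

/* Model of the transition added below: DELETING_NOIO may only be
 * entered from DELETING or DEAD. From any other state (e.g.
 * RESETTING during a controller reset) the request is refused,
 * which is why the call in nvme_remove_namespaces() is a harmless
 * no-op in the reset path. */
static bool enter_deleting_noio(enum ctrl_state old)
{
	switch (old) {
	case DELETING:
	case DEAD:
		return true;
	default:
		return false;
	}
}

int main(void)
{
	printf("delete flow, DELETING -> DELETING_NOIO: %s\n",
	       enter_deleting_noio(DELETING) ? "allowed" : "refused");
	printf("reset flow, RESETTING -> DELETING_NOIO: %s\n",
	       enter_deleting_noio(RESETTING) ? "allowed" : "refused");
	return 0;
}
--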
Fixes: 0d0b660f214d ("nvme: add ANA support")
Reported-by: Anton Eidelman <anton@lightbitslabs.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/host/core.c | 15 +++++++++++++++
drivers/nvme/host/fc.c | 1 +
drivers/nvme/host/multipath.c | 18 +++++++++++++++---
drivers/nvme/host/nvme.h | 6 ++++++
drivers/nvme/host/rdma.c | 10 ++++++----
drivers/nvme/host/tcp.c | 15 +++++++++------
6 files changed, 52 insertions(+), 13 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1d7c7afb1348..c16bfdff2953 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -366,6 +366,16 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
break;
}
break;
+ case NVME_CTRL_DELETING_NOIO:
+ switch (old_state) {
+ case NVME_CTRL_DELETING:
+ case NVME_CTRL_DEAD:
+ changed = true;
+ /* FALLTHRU */
+ default:
+ break;
+ }
+ break;
case NVME_CTRL_DEAD:
switch (old_state) {
case NVME_CTRL_DELETING:
@@ -403,6 +413,7 @@ static bool nvme_state_terminal(struct nvme_ctrl *ctrl)
case NVME_CTRL_CONNECTING:
return false;
case NVME_CTRL_DELETING:
+ case NVME_CTRL_DELETING_NOIO:
case NVME_CTRL_DEAD:
return true;
default:
@@ -3476,6 +3487,7 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
[NVME_CTRL_RESETTING] = "resetting",
[NVME_CTRL_CONNECTING] = "connecting",
[NVME_CTRL_DELETING] = "deleting",
+ [NVME_CTRL_DELETING_NOIO]= "deleting (no IO)",
[NVME_CTRL_DEAD] = "dead",
};
@@ -4112,6 +4124,9 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
if (ctrl->state == NVME_CTRL_DEAD)
nvme_kill_queues(ctrl);
+ /* this is a no-op when called from the controller reset handler */
+ nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
+
down_write(&ctrl->namespaces_rwsem);
list_splice_init(&ctrl->namespaces, &ns_list);
up_write(&ctrl->namespaces_rwsem);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 6aa30bb5a762..b27c54dc6683 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -826,6 +826,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
break;
case NVME_CTRL_DELETING:
+ case NVME_CTRL_DELETING_NOIO:
default:
/* no action to take - let it delete */
break;
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 74bad4e3d377..900b35d47ec7 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -167,9 +167,18 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
static bool nvme_path_is_disabled(struct nvme_ns *ns)
{
- return ns->ctrl->state != NVME_CTRL_LIVE ||
- test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
- test_bit(NVME_NS_REMOVING, &ns->flags);
+ /*
+ * We don't treat NVME_CTRL_DELETING as a disabled path as I/O should
+ * still be able to complete assuming that the controller is connected.
+ * Otherwise it will fail immediately and return to the requeue list.
+ */
+ if (ns->ctrl->state != NVME_CTRL_LIVE &&
+ ns->ctrl->state != NVME_CTRL_DELETING)
+ return true;
+ if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &ns->flags))
+ return true;
+ return false;
}
static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
@@ -563,6 +572,9 @@ static void nvme_ana_work(struct work_struct *work)
{
struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work);
+ if (ctrl->state != NVME_CTRL_LIVE)
+ return;
+
nvme_read_ana_log(ctrl);
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 92629758b77c..1609267a1f0e 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -190,6 +190,11 @@ static inline u16 nvme_req_qid(struct request *req)
* @NVME_CTRL_CONNECTING: Controller is disconnected, now connecting the
* transport
* @NVME_CTRL_DELETING: Controller is deleting (or scheduled deletion)
+ * @NVME_CTRL_DELETING_NOIO: Controller is deleting and I/O is disabled;
+ * new I/O fails immediately. This state comes
+ * after all async event processing took place and
+ * before ns removal and the rest of the controller
+ * deletion progress
* @NVME_CTRL_DEAD: Controller is non-present/unresponsive during
* shutdown or removal. In this case we forcibly
* kill all inflight I/O as they have no chance to
@@ -201,6 +206,7 @@ enum nvme_ctrl_state {
NVME_CTRL_RESETTING,
NVME_CTRL_CONNECTING,
NVME_CTRL_DELETING,
+ NVME_CTRL_DELETING_NOIO,
NVME_CTRL_DEAD,
};
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 467da08db309..5c3848974ccb 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1102,11 +1102,12 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
if (!changed) {
/*
- * state change failure is ok if we're in DELETING state,
+ * state change failure is ok if we started ctrl delete,
* unless we're during creation of a new controller to
* avoid races with teardown flow.
*/
- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
+ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
+ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
WARN_ON_ONCE(new);
ret = -EINVAL;
goto destroy_io;
@@ -1159,8 +1160,9 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
- /* state change failure is ok if we're in DELETING state */
- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
+ /* state change failure is ok if we started ctrl delete */
+ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
+ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
return;
}
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index b2e73e19ef01..8c8fb65ca928 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1950,11 +1950,12 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) {
/*
- * state change failure is ok if we're in DELETING state,
+ * state change failure is ok if we started ctrl delete,
* unless we're during creation of a new controller to
* avoid races with teardown flow.
*/
- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
+ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+ ctrl->state != NVME_CTRL_DELETING_NOIO);
WARN_ON_ONCE(new);
ret = -EINVAL;
goto destroy_io;
@@ -2010,8 +2011,9 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
blk_mq_unquiesce_queue(ctrl->admin_q);
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
- /* state change failure is ok if we're in DELETING state */
- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
+ /* state change failure is ok if we started ctrl delete */
+ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+ ctrl->state != NVME_CTRL_DELETING_NOIO);
return;
}
@@ -2046,8 +2048,9 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
nvme_tcp_teardown_ctrl(ctrl, false);
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
- /* state change failure is ok if we're in DELETING state */
- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
+ /* state change failure is ok if we started ctrl delete */
+ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
+ ctrl->state != NVME_CTRL_DELETING_NOIO);
return;
}
--
2.25.1
* Re: [PATCH v3 0/2] resolve controller delete hang due to ongoing mpath I/O
From: Christoph Hellwig @ 2020-07-23 9:02 UTC (permalink / raw)
To: Sagi Grimberg
Cc: Keith Busch, Anton Eidelman, Christoph Hellwig, linux-nvme, James Smart
Thanks,
applied to nvme-5.9.
* Re: [PATCH v3 2/2] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
From: Logan Gunthorpe @ 2020-07-23 23:56 UTC (permalink / raw)
To: Sagi Grimberg, linux-nvme, Christoph Hellwig, Keith Busch
Cc: Anton Eidelman, James Smart
On 2020-07-22 5:32 p.m., Sagi Grimberg wrote:
> A deadlock happens in the following scenario with multipath:
> 1) scan_work(nvme0) detects a new nsid while nvme0
> is an optimized path to it; path nvme1 happens to be
> inaccessible.
>
> 2) Before scan_work completes, an nvme0 disconnect is initiated
> and nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.
>
> 3) The scan_work from step 1) attempts to submit I/O,
> but nvme_path_is_optimized() observes that nvme0 is not LIVE.
> Since nvme1 is still a possible path, the I/O is requeued and scan_work hangs.
>
[snip]
>
> Fixes: 0d0b660f214d ("nvme: add ANA support")
> Reported-by: Anton Eidelman <anton@lightbitslabs.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
I just tested nvme-5.9 and, after bisecting, found that this commit is
hanging the nvme/031 test in blktests[1]. The test just rapidly creates,
connects and destroys nvmet subsystems. The dmesg trace is below but I
haven't really dug into the root cause.
Thanks,
Logan
[1] https://github.com/osandov/blktests/blob/master/tests/nvme/031
--
[ 191.715456] run blktests nvme/031 at 2020-07-23 17:50:47
[ 191.839255] nvmet: adding nsid 1 to subsystem blktests-subsystem-0
[ 191.870671] nvmet: creating controller 1 for subsystem blktests-subsystem-0 for NQN nqn.2014-08.org.nvmexpress:uuid:951e55c4-2f33-4c8f-95f0-7983d9c2e9a4.
[ 191.876700] nvme nvme3: creating 4 I/O queues.
[ 191.880849] nvme nvme3: new ctrl: "blktests-subsystem-0"
[ 191.892297] nvme nvme3: Removing ctrl: NQN "blktests-subsystem-0"
[ 222.173354] nvmet: ctrl 1 keep-alive timer (15 seconds) expired!
[ 222.174547] nvmet: ctrl 1 fatal error occurred!
[ 364.000818] INFO: task kworker/u9:1:137 blocked for more than 120 seconds.
[ 364.002018] Not tainted 5.8.0-rc4-eid-vmlocalyes-dbg-01041-g97e9bb2b27bc #1442
[ 364.003069] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 364.004127] kworker/u9:1 D 0 137 2 0x00004000
[ 364.004924] Workqueue: nvme-wq nvme_scan_work
[ 364.005670] Call Trace:
[ 364.006096] __schedule+0x56b/0xcd0
[ 364.006692] ? __sched_text_start+0x8/0x8
[ 364.007376] ? mark_held_locks+0x86/0xb0
[ 364.008050] schedule+0x80/0x140
[ 364.008610] io_schedule+0x21/0x50
[ 364.009131] wait_on_page_bit_common+0x222/0x730
[ 364.009772] ? page_cache_free_page.isra.43+0x1f0/0x1f0
[ 364.010499] ? find_get_pages_range_tag+0x680/0x680
[ 364.011156] do_read_cache_page+0x697/0xb20
[ 364.011715] ? blkdev_get+0xe8/0x260
[ 364.012199] ? nvme_validate_ns+0x818/0x1020
[ 364.012828] ? process_one_work+0x66b/0xb70
[ 364.013393] ? generic_file_read_iter+0x1f0/0x1f0
[ 364.014026] ? create_object+0x46f/0x4f0
[ 364.014556] ? mark_held_locks+0x86/0xb0
[ 364.015143] read_cache_page+0x3a/0x50
[ 364.015651] read_part_sector+0x74/0x253
[ 364.016178] read_lba+0x1da/0x340
[ 364.016630] ? ultrix_partition+0x430/0x430
[ 364.017270] ? efi_partition+0x324/0xac3
[ 364.017817] ? kasan_kmalloc+0x9/0x10
[ 364.018322] ? kmem_cache_alloc_trace+0x11f/0x290
[ 364.018982] efi_partition+0x342/0xac3
[ 364.019488] ? widen_string+0x190/0x190
[ 364.020010] ? is_gpt_valid.part.5+0x630/0x630
[ 364.020600] ? format_decode+0xd6/0x5c0
[ 364.021191] ? enable_ptr_key_workfn+0x30/0x30
[ 364.021802] ? hex_string+0x2e0/0x2e0
[ 364.022325] ? memcpy+0x4d/0x60
[ 364.022752] ? vsnprintf+0x66f/0x8e0
[ 364.023238] ? pointer+0x5b0/0x5b0
[ 364.023705] ? add_part+0x1b0/0x1b0
[ 364.024176] ? vsprintf+0x20/0x20
[ 364.024634] ? is_gpt_valid.part.5+0x630/0x630
[ 364.025287] blk_add_partitions+0x1f1/0x650
[ 364.025856] bdev_disk_changed+0xd8/0x190
[ 364.026398] __blkdev_get+0x812/0xa50
[ 364.026896] ? __blkdev_put+0x400/0x400
[ 364.027411] ? bdget+0x1b8/0x1d0
[ 364.027848] ? blkdev_fsync+0x70/0x70
[ 364.028345] blkdev_get+0xe8/0x260
[ 364.028837] __device_add_disk+0x597/0x850
[ 364.029396] ? blk_alloc_devt+0x1d0/0x1d0
[ 364.029934] ? kobject_uevent_env+0xce/0xa90
[ 364.030513] ? __kasan_check_write+0x14/0x20
[ 364.031087] device_add_disk+0x13/0x20
[ 364.031591] nvme_mpath_set_live+0x1c7/0x1d0
[ 364.032161] ? nvme_ana_work+0x40/0x40
[ 364.032666] nvme_update_ns_ana_state+0x7f/0x90
[ 364.033303] nvme_mpath_add_disk+0x216/0x240
[ 364.033878] ? nvme_mpath_stop+0x40/0x40
[ 364.034416] nvme_validate_ns+0x818/0x1020
[ 364.034973] ? nvme_dev_ioctl+0x1e0/0x1e0
[ 364.035510] ? __blk_mq_free_request+0xf3/0x130
[ 364.036119] ? blk_mq_free_request+0x1ea/0x250
[ 364.036721] ? __nvme_submit_sync_cmd+0x186/0x310
[ 364.037376] ? kasan_unpoison_shadow+0x35/0x50
[ 364.037975] ? __kasan_kmalloc.constprop.14+0xc1/0xd0
[ 364.038649] ? nvme_scan_work+0x167/0x3d4
[ 364.039194] nvme_scan_work+0x259/0x3d4
[ 364.039715] ? nvme_fw_act_work+0x220/0x220
[ 364.040274] ? process_one_work+0x590/0xb70
[ 364.040864] ? lock_is_held_type+0xba/0xf0
[ 364.041422] ? rcu_read_lock_bh_held+0xc0/0xc0
[ 364.042016] ? trace_hardirqs_on+0x2d/0x120
[ 364.042581] process_one_work+0x66b/0xb70
[ 364.043117] ? worker_thread+0x146/0x6c0
[ 364.043650] ? pwq_dec_nr_in_flight+0x130/0x130
[ 364.044258] ? lockdep_hardirqs_off+0x64/0xa0
[ 364.045005] worker_thread+0x6e/0x6c0
[ 364.045510] ? __kasan_check_read+0x11/0x20
[ 364.046077] ? process_one_work+0xb70/0xb70
[ 364.046639] kthread+0x1e7/0x210
[ 364.047079] ? kthread_create_on_node+0xc0/0xc0
[ 364.047689] ret_from_fork+0x22/0x30
[ 364.048195] INFO: task nvme:536 blocked for more than 120 seconds.
[ 364.049045] Not tainted 5.8.0-rc4-eid-vmlocalyes-dbg-01041-g97e9bb2b27bc #1442
[ 364.050068] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 364.051098] nvme D 0 536 507 0x00000000
[ 364.051827] Call Trace:
[ 364.052167] __schedule+0x56b/0xcd0
[ 364.052644] ? __sched_text_start+0x8/0x8
[ 364.053208] ? do_raw_spin_lock+0x11e/0x1e0
[ 364.053773] ? lock_is_held_type+0xba/0xf0
[ 364.054328] schedule+0x80/0x140
[ 364.054768] schedule_preempt_disabled+0xe/0x10
[ 364.055372] __mutex_lock+0x6cf/0xba0
[ 364.055863] ? __flush_work+0x4a2/0x5b0
[ 364.056379] ? nvme_mpath_clear_ctrl_paths+0x2f/0xb0
[ 364.057065] ? mutex_trylock+0x190/0x190
[ 364.057596] ? __flush_work+0x508/0x5b0
[ 364.058116] ? mark_held_locks+0x86/0xb0
[ 364.058646] ? lockdep_hardirqs_on+0x71/0xf0
[ 364.059216] ? __cancel_work_timer+0x11e/0x2d0
[ 364.059815] mutex_lock_nested+0x1b/0x20
[ 364.060340] ? mod_delayed_work_on+0x120/0x120
[ 364.060957] ? mutex_lock_nested+0x1b/0x20
[ 364.061508] nvme_mpath_clear_ctrl_paths+0x2f/0xb0
[ 364.062152] nvme_remove_namespaces+0x8a/0x230
[ 364.062746] ? lockdep_hardirqs_on+0x71/0xf0
[ 364.063320] ? nvme_remove_invalid_namespaces+0x240/0x240
[ 364.064046] nvme_do_delete_ctrl+0x6f/0xa6
[ 364.064596] nvme_sysfs_delete.cold.97+0x8/0xd
[ 364.065218] dev_attr_store+0x3f/0x60
[ 364.065715] ? component_bind_all.cold.15+0xb5/0xb5
[ 364.066365] sysfs_kf_write+0x89/0xb0
[ 364.066860] ? sysfs_file_ops+0xa0/0xa0
[ 364.067375] kernfs_fop_write+0x155/0x250
[ 364.067917] __vfs_write+0x50/0xa0
[ 364.068378] vfs_write+0xf9/0x280
[ 364.068855] ksys_write+0xcc/0x170
[ 364.069319] ? __ia32_sys_read+0x50/0x50
[ 364.069850] ? entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 364.070548] __x64_sys_write+0x43/0x50
[ 364.071054] do_syscall_64+0x60/0xf0
[ 364.071537] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 364.072211] RIP: 0033:0x7f75321d9504
[ 364.072691] Code: Bad RIP value.
[ 364.073151] RSP: 002b:00007ffef4c2c168 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 364.074152] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f75321d9504
[ 364.075155] RDX: 0000000000000001 RSI: 0000558c5cba8e4e RDI: 0000000000000004
[ 364.076094] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000558c5dff0460
[ 364.077060] R10: 0000000000000000 R11: 0000000000000246 R12: 0000558c5dff0500
[ 364.078063] R13: 00007ffef4c2d888 R14: 0000000000000020 R15: 0000000000000003
[ 364.079017]
[ 364.079017] Showing all locks held in the system:
[ 364.079838] 1 lock held by khungtaskd/35:
[ 364.080375] #0: ffffffff97758de0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x33/0x1cb
[ 364.081587] 4 locks held by kworker/u9:1/137:
[ 364.082169] #0: ffff888128b93938 ((wq_completion)nvme-wq){+.+.}-{0:0}, at: process_one_work+0x590/0xb70
[ 364.083417] #1: ffff888129d1fde0 ((work_completion)(&ctrl->scan_work)){+.+.}-{0:0}, at: process_one_work+0x590/0xb70
[ 364.084835] #2: ffff888272634550 (&ctrl->scan_lock){+.+.}-{3:3}, at: nvme_scan_work+0x101/0x3d4
[ 364.085992] #3: ffff888116852600 (&bdev->bd_mutex){+.+.}-{3:3}, at: __blkdev_get+0x103/0xa50
[ 364.087115] 2 locks held by kworker/1:1H/193:
[ 364.087697] 1 lock held by in:imklog/313:
[ 364.088231] #0: ffff8881209228f0 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x7e/0x80
[ 364.089338] 4 locks held by nvme/536:
[ 364.089832] #0: ffff888127312448 (sb_writers#4){.+.+}-{0:0}, at: vfs_write+0x251/0x280
[ 364.090884] #1: ffff88812270b488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write+0xf5/0x250
[ 364.091981] #2: ffff888270cd5748 (kn->active#185){++++}-{0:0}, at: kernfs_remove_self+0x1a5/0x230
[ 364.093181] #3: ffff888272634550 (&ctrl->scan_lock){+.+.}-{3:3}, at: nvme_mpath_clear_ctrl_paths+0x2f/0xb0
[ 364.094461]
[ 364.094672] =============================================
[ 364.094672]
* Re: [PATCH v3 2/2] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
From: Sagi Grimberg @ 2020-07-24 0:11 UTC (permalink / raw)
To: Logan Gunthorpe, linux-nvme, Christoph Hellwig, Keith Busch
Cc: Anton Eidelman, James Smart
>> Fixes: 0d0b660f214d ("nvme: add ANA support")
>> Reported-by: Anton Eidelman <anton@lightbitslabs.com>
>> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> I just tested nvme-5.9 and, after bisecting, found that this commit is
> hanging the nvme/031 test in blktests[1]. The test just rapidly creates,
> connects and destroys nvmet subsystems. The dmesg trace is below but I
> haven't really dug into the root cause.
Thanks for reporting Logan!
The call to nvme_mpath_clear_ctrl_paths was delicate because it had
to cope with an effects command coming in to a mpath device during
traffic, and also with controller reset.
But nothing, afaict, should prevent the scan_work from being flushed before
we call nvme_mpath_clear_ctrl_paths; in fact, the current order even invites
a race because the scan_work holds the scan_lock.
Can you try?
--
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 35c39932c491..ac3fbc4005ad 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4105,6 +4105,9 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
struct nvme_ns *ns, *next;
LIST_HEAD(ns_list);
+ /* prevent racing with ns scanning */
+ flush_work(&ctrl->scan_work);
+
/*
* make sure to requeue I/O to all namespaces as these
* might result from the scan itself and must complete
@@ -4112,9 +4115,6 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
*/
nvme_mpath_clear_ctrl_paths(ctrl);
- /* prevent racing with ns scanning */
- flush_work(&ctrl->scan_work);
-
/*
* The dead states indicates the controller was not gracefully
* disconnected. In that case, we won't be able to flush any data while
--
* Re: [PATCH v3 2/2] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
From: Sagi Grimberg @ 2020-07-24 0:26 UTC (permalink / raw)
To: Logan Gunthorpe, linux-nvme, Christoph Hellwig, Keith Busch
Cc: Anton Eidelman, James Smart
>>> Fixes: 0d0b660f214d ("nvme: add ANA support")
>>> Reported-by: Anton Eidelman <anton@lightbitslabs.com>
>>> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
>> I just tested nvme-5.9 and, after bisecting, found that this commit is
>> hanging the nvme/031 test in blktests[1]. The test just rapidly creates,
>> connects and destroys nvmet subsystems. The dmesg trace is below but I
>> haven't really dug into the root cause.
>
> Thanks for reporting Logan!
>
> The call to nvme_mpath_clear_ctrl_paths was delicate because it had
> to do with an effects command coming in to a mpath device during
> traffic and also controller reset.
Actually, I think I'm confusing myself; the original report was from you, Logan.
> But nothing afaict should prevent the scan_work from flushing before we
> call nvme_mpath_clear_ctrl_paths, in fact, it even calls for a race
> because the scan_work has the scan_lock taken.
Actually, I think that the design was to unblock the scan_work, and that
is why nvme_mpath_clear_ctrl_paths was placed before it (as the comment
says).
But looking at the implementation of nvme_mpath_clear_ctrl_paths, it's
completely unclear why it should take the scan_lock; it is just clearing
the paths.
I think the correct fix is to not take the scan_lock
and only take the namespaces_rwsem.
So a more appropriate patch would be:
--
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 900b35d47ec7..83beffddbc0a 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -156,13 +156,11 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
{
struct nvme_ns *ns;
- mutex_lock(&ctrl->scan_lock);
down_read(&ctrl->namespaces_rwsem);
list_for_each_entry(ns, &ctrl->namespaces, list)
if (nvme_mpath_clear_current_path(ns))
kblockd_schedule_work(&ns->head->requeue_work);
up_read(&ctrl->namespaces_rwsem);
- mutex_unlock(&ctrl->scan_lock);
}
--
I'll also prepare a setup and run the test.
* Re: [PATCH v3 2/2] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
From: Sagi Grimberg @ 2020-07-24 1:03 UTC (permalink / raw)
To: Logan Gunthorpe, linux-nvme, Christoph Hellwig, Keith Busch
Cc: Anton Eidelman, James Smart
> Actually, I think that the design was to unblock the scan_work, and that
> is why nvme_mpath_clear_ctrl_paths was placed before it (as the comment
> says).
>
> But looking at the implementation of nvme_mpath_clear_ctrl_paths, it's
> completely unclear why it should take the scan_lock; it is just clearing
> the paths.
>
> I think the correct fix is to not take the scan_lock
> and only take the namespaces_rwsem.
OK, I was able to reproduce this on my setup.
What was needed is for fabrics to allow I/O to pass in
NVME_CTRL_DELETING, which requires this add-on:
--
nvme-fabrics: don't fast fail on ctrl state DELETING
NVME_CTRL_DELETING is now a state that allows I/O to be sent to the
device; once the controller transitions into
NVME_CTRL_DELETING_NOIO, we fail the I/O.
Note that this is fine because the transport itself has
a queue state to protect against queue access.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index a0ec40ab62ee..a9c1e3b4585e 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -182,7 +182,8 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
bool queue_live)
{
- if (likely(ctrl->state == NVME_CTRL_LIVE))
+ if (likely(ctrl->state == NVME_CTRL_LIVE ||
+ ctrl->state == NVME_CTRL_DELETING))
return true;
return __nvmf_check_ready(ctrl, rq, queue_live);
}
--
Logan,
Can you verify that it works for you?
BTW, I'm still seriously suspicious about why nvme_mpath_clear_ctrl_paths
takes the scan_lock. It appears that it shouldn't. I'm tempted to
remove it and see if anyone complains...
* Re: [PATCH v3 2/2] nvme-core: fix deadlock in disconnect during scan_work and/or ana_work
From: Logan Gunthorpe @ 2020-07-24 16:17 UTC (permalink / raw)
To: Sagi Grimberg, linux-nvme, Christoph Hellwig, Keith Busch
Cc: Anton Eidelman, James Smart
On 2020-07-23 7:03 p.m., Sagi Grimberg wrote:
>
>> Actually, I think that the design was to unblock the scan_work, and that
>> is why nvme_mpath_clear_ctrl_paths was placed before it (as the comment
>> says).
>>
>> But looking at the implementation of nvme_mpath_clear_ctrl_paths, it's
>> completely unclear why it should take the scan_lock; it is just clearing
>> the paths.
>>
>> I think the correct fix is to not take the scan_lock
>> and only take the namespaces_rwsem.
>
> OK, I was able to reproduce this on my setup.
>
> What was needed is for fabrics to allow I/O to pass in
> NVME_CTRL_DELETING, which requires this add-on:
> --
> nvme-fabrics: don't fast fail on ctrl state DELETING
>
> NVME_CTRL_DELETING is now a state that allows I/O to be sent to the
> device; once the controller transitions into
> NVME_CTRL_DELETING_NOIO, we fail the I/O.
>
> Note that this is fine because the transport itself has
> a queue state to protect against queue access.
>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
>
> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
> index a0ec40ab62ee..a9c1e3b4585e 100644
> --- a/drivers/nvme/host/fabrics.h
> +++ b/drivers/nvme/host/fabrics.h
> @@ -182,7 +182,8 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
> static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
> bool queue_live)
> {
> - if (likely(ctrl->state == NVME_CTRL_LIVE))
> + if (likely(ctrl->state == NVME_CTRL_LIVE ||
> + ctrl->state == NVME_CTRL_DELETING))
> return true;
> return __nvmf_check_ready(ctrl, rq, queue_live);
> }
> --
>
> Logan,
>
> Can you verify that it works for you?
Yes, thanks, this fixes the issue for me.
> BTW, I'm still seriously suspicious on why nvme_mpath_clear_ctrl_paths
> is taking the scan_lock. It appears that it shouldn't. I'm tempted to
> remove it and see if anyone complains...
Not really sure myself, but I took a cursory look and don't see any
obvious reason why scan_lock needs to be taken there.
Logan