linux-nvme.lists.infradead.org archive mirror
* nvme-tcp bricks my computer
@ 2021-02-01 19:04 Belanger, Martin
  2021-02-03  8:05 ` Sagi Grimberg
  0 siblings, 1 reply; 11+ messages in thread
From: Belanger, Martin @ 2021-02-01 19:04 UTC (permalink / raw)
  To: linux-nvme

I'm running "nvme discover" over a TCP connection. The nvme-tcp module freezes completely and bricks my computer.

Steps:
$ sudo modprobe nvme-tcp
$ sudo nvme discover -t tcp -a [IP address] -s 8009
<System Bricked!>
Only a reboot (Alt-SysRq-B) can recover the system. 

Conditions to reproduce the problem:
The Discovery Controller must support sending Discovery Log Change Notifications. That is, bit 31 of the Identity's OAES field returned by the discovery controller must be set to 1. If OAES[31]=0, then everything is OK.
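For reference, the OAES check described above can be sketched as follows. This is a minimal illustration, not driver code: the 4-byte OAES field sits at byte offset 92 of the Identify Controller data structure per the NVMe base spec, and the helper name is hypothetical.

```python
import struct

def discovery_log_change_supported(identify_data: bytes) -> bool:
    """Return True if OAES[31] (Discovery Log Page Change Notice) is set
    in an Identify Controller data structure (OAES is bytes 95:92)."""
    (oaes,) = struct.unpack_from("<I", identify_data, 92)  # little-endian u32
    return bool(oaes & (1 << 31))

# Example: a zeroed 4096-byte structure with only OAES[31] set
buf = bytearray(4096)
struct.pack_into("<I", buf, 92, 1 << 31)
print(discovery_log_change_supported(bytes(buf)))  # True
```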

Systems tested:
1) Ubuntu 20.04, Linux 5.8, nvme 1.13.21
2) Fedora 33, Linux 5.10, nvme 1.11.1

Regards,
Martin
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* Re: nvme-tcp bricks my computer
  2021-02-01 19:04 nvme-tcp bricks my computer Belanger, Martin
@ 2021-02-03  8:05 ` Sagi Grimberg
       [not found]   ` <SJ0PR19MB4544B86790800C6371E7F96AF2B49@SJ0PR19MB4544.namprd19.prod.outlook.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-03  8:05 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


> I'm running "nvme discover" over a TCP connection. The nvme-tcp module freezes completely and bricks my computer.
> 
> Steps:
> $ sudo modprobe nvme-tcp
> $ sudo nvme discover -t tcp -a [IP address] -s 8009
> <System Bricked!>
> Only a reboot (Alt-SysRq-B) can recover the system.

Do you have a stack trace to share?

> Conditions to reproduce the problem:
> The Discovery Controller must support sending Discovery Log Change Notifications. That is, bit 31 of the Identity's OAES field returned by the discovery controller must be set to 1. If OAES[31]=0, then everything is OK.

What is the discovery log page returned by the nvme discovery 
controller? Does it include referrals? There was an issue fixed
in nvme-cli with respect to referrals (although nothing that is
related to any oaes changes).

> Systems tested:
> 1) Ubuntu 20.04, Linux 5.8, nvme 1.13.21
> 2) Fedora 33, Linux 5.10, nvme 1.11.1

Are these the default kernels that come with the distribution?
Does this happen with the latest upstream?

People have been doing this for a long time now and no reports
of this phenomenon have come in. I'm assuming this is not the Linux
nvmet target, correct?


* Re: nvme-tcp bricks my computer
       [not found]   ` <SJ0PR19MB4544B86790800C6371E7F96AF2B49@SJ0PR19MB4544.namprd19.prod.outlook.com>
@ 2021-02-03 22:36     ` Sagi Grimberg
  2021-02-03 22:56       ` Sagi Grimberg
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-03 22:36 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


>> I'm running "nvme discover" over a TCP connection. The nvme-tcp module freezes completely and bricks my computer.
>> 
>> Steps:
>> $ sudo modprobe nvme-tcp
>> $ sudo nvme discover -t tcp -a [IP address] -s 8009
>> <System Bricked!>
>> Only a reboot (Alt-SysRq-B) can recover the system.
> 
> Do you have a stack trace to share?
> *<MB> No. I'm not able to collect data since my computer freezes 
> completely. I had a window opened that was showing the syslog at the 
> time the freeze occurred. I took a couple of pictures (attached). *

OK, are you able to scroll up a bit to see the RIP line in the
stack trace, and whether this is a NULL dereference or something else?

>> Conditions to reproduce the problem:
>> The Discovery Controller must support sending Discovery Log Change Notifications. That is, bit 31 of the Identity's OAES field returned by the discovery controller must be set to 1. If OAES[31]=0, then everything is OK.
> 
> What is the discovery log page returned by the nvme discovery
> controller? Does it include referrals? There was an issue fixed
> in nvme-cli with respect to referrals (although nothing that is
> related to any oaes changes).
> *<MB> The driver never makes it to asking for the discovery log page. It 
> freezes as soon as it receives the "Identity" message. *

You mean Identify Controller? That is strange, because I'm not sure it
should make any difference...

>> Systems tested:
>> 1) Ubuntu 20.04, Linux 5.8, nvme 1.13.21
>> 2) Fedora 33, Linux 5.10, nvme 1.11.1
> 
> Are these the default kernels that come with the distribution?
> *<MB> Yes.*
> 
> *On Ubuntu 20.04:*
> *$ uname -rsvpi*
> *Linux 5.8.0-41-generic #46~20.04.1-Ubuntu SMP Mon Jan 18 17:52:23 UTC 2021 x86_64 x86_64*
> 
> *On Fedora 33:*
> *$ uname -rsvpi*
> *Linux 5.10.11-200.fc33.x86_64 #1 SMP Wed Jan 27 20:21:22 UTC 2021 x86_64 x86_64*
> 
> Does this happen with the latest upstream?
> *<MB> I did compile the latest upstream kernel modules, but 
> Ubuntu/Fedora won't let me modprobe them. I'm not a kernel expert. There 
> seems to be some security in place that prevents one from loading a 
> kernel module that did not come with the official release. I tried 
> several things suggested on Google to work around this but could never 
> get the latest kernel modules loaded. By the way, Fedora 33 is pretty 
> close to the latest upstream (i.e. Linux 5.10) and I see the same issue 
> there.*

We definitely didn't get such a bug report on this kernel.

Does this happen if you directly connect to a normal nvme controller?

> *<MB> As I said earlier, everything works fine until I change the 
> Discovery Controller to return OAES[31]=1 in the Identity message. 
> From what I see in the nvme-tcp code, this tells the driver to enable 
> AER/AEN. I think that's where the issue is. Since I'm not a kernel 
> expert, I cannot diagnose the problem further than that.*

It appears that without discovery log change events we never submit
an async event, so it must be something there.

Can you share your kernel config file?

> I'm assuming this is not Linux nvmet target correct?
> *<MB> I don't know what that means: nvmet? *

What is your target implementation? Is this the nvme target that
is built into Linux?


* Re: nvme-tcp bricks my computer
  2021-02-03 22:36     ` Sagi Grimberg
@ 2021-02-03 22:56       ` Sagi Grimberg
       [not found]         ` <SJ0PR19MB4544A3DF2B1B2A1EEB8D1B41F2B39@SJ0PR19MB4544.namprd19.prod.outlook.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-03 22:56 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme

Martin,

I can actually see a RIP line,

Can you run gdb on vmlinux and share what this provides?

l *(nvme_tcp_init_iter+0x60)

It is strange that we even get there, because we only
initialize a request iter if we have data, and that shouldn't
be the case for the async request.


* Re: nvme-tcp bricks my computer
       [not found]         ` <SJ0PR19MB4544A3DF2B1B2A1EEB8D1B41F2B39@SJ0PR19MB4544.namprd19.prod.outlook.com>
@ 2021-02-04  7:43           ` Sagi Grimberg
  2021-02-05 18:49             ` Sagi Grimberg
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-04  7:43 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


> Hi Sagi,
> 
> I was able to capture a bit more info. Again, because my computer 
> freezes (i.e. no keyboard or mouse response) I could only take a picture 
> of the screen (attached). This does say NULL pointer dereference.
> 
> To answer your questions:
> 
> Q) Does this happen if you directly connect to a normal nvme controller?
> A) I still need to try this. All I know is that it doesn't happen when 
> the controller returns OAES[31]=0.
> 
> Q) What is your target implementation? Is this the nvme target that is 
> built into Linux?
> A) I work for Dell, and I'm on the same team as Douglas Farley and Erik 
> Smith, whom I believe you've had discussions with. Dell is currently 
> developing a Central Discovery Controller (CDC). That's the target I've 
> been testing with. I need to find different targets I can test with but 
> working from home makes it a bit difficult. I will check with my team 
> tomorrow to see if I can connect to a different target.
> 
> Q) Can you share your kernel config file?
> Q) Can you run gdb on vmlinux and share what this provides?
> A) I'll get that tomorrow (both questions).

For the record, I compiled Fedora 33 kernel and ran it on my VM
and it doesn't happen. I also modified Linux nvme target to
report the same limits as your target and still it doesn't happen.
--
[  162.006049] nvme nvme0: Failed to read smart log (error 24577)
[  162.006060] nvme nvme0: queue_size 128 > ctrl sqsize 32, clamping down
[  162.006063] nvme nvme0: sqsize 32 > ctrl maxcmd 31, clamping down
[  162.006778] nvme nvme0: new ctrl: NQN 
"nqn.2014-08.org.nvmexpress.discovery", addr 192.168.123.1:8009
[  162.008939] nvme nvme0: Removing ctrl: NQN 
"nqn.2014-08.org.nvmexpress.discovery"
--

So something here seems to be specific to your env.


* Re: nvme-tcp bricks my computer
  2021-02-04  7:43           ` Sagi Grimberg
@ 2021-02-05 18:49             ` Sagi Grimberg
       [not found]               ` <SJ0PR19MB45442A0B8E171B55A8AACACEF2B29@SJ0PR19MB4544.namprd19.prod.outlook.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-05 18:49 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


>> Hi Sagi,
>>
>> I was able to capture a bit more info. Again, because my computer 
>> freezes (i.e. no keyboard or mouse response) I could only take a 
>> picture of the screen (attached). This does say NULL pointer dereference.
>>
>> To answer your questions:
>>
>> Q) Does this happen if you directly connect to a normal nvme controller?
>> A) I still need to try this. All I know is that it doesn't happen when 
>> the controller returns OAES[31]=0.
>>
>> Q) What is your target implementation? Is this the nvme target that is 
>> built into Linux?
>> A) I work for Dell, and I'm on the same team as Douglas Farley and 
>> Erik Smith, whom I believe you've had discussions with. Dell is 
>> currently developing a Central Discovery Controller (CDC). That's the 
>> target I've been testing with. I need to find different targets I can 
>> test with but working from home makes it a bit difficult. I will check 
>> with my team tomorrow to see if I can connect to a different target.
>>
>> Q) Can you share your kernel config file?
>> Q) Can you run gdb on vmlinux and share what this provides?
>> A) I'll get that tomorrow (both questions).
> 
> For the record, I compiled Fedora 33 kernel and ran it on my VM
> and it doesn't happen. I also modified Linux nvme target to
> report the same limits as your target and still it doesn't happen.
> -- 
> [  162.006049] nvme nvme0: Failed to read smart log (error 24577)
> [  162.006060] nvme nvme0: queue_size 128 > ctrl sqsize 32, clamping down
> [  162.006063] nvme nvme0: sqsize 32 > ctrl maxcmd 31, clamping down
> [  162.006778] nvme nvme0: new ctrl: NQN 
> "nqn.2014-08.org.nvmexpress.discovery", addr 192.168.123.1:8009
> [  162.008939] nvme nvme0: Removing ctrl: NQN 
> "nqn.2014-08.org.nvmexpress.discovery"
> -- 
> 
> So something here seems to be specific to your env.

Hey Martin, any updates with this?


* Re: nvme-tcp bricks my computer
       [not found]               ` <SJ0PR19MB45442A0B8E171B55A8AACACEF2B29@SJ0PR19MB4544.namprd19.prod.outlook.com>
@ 2021-02-05 19:38                 ` Sagi Grimberg
       [not found]                   ` <SJ0PR19MB4544A653D3A84A19A28D3984F28E9@SJ0PR19MB4544.namprd19.prod.outlook.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-05 19:38 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


> Hi Sagi,
> 
> I tried with nvmet-tcp and, just like you, I don't see a problem. So now 
> I'm doing packet captures and comparing the differences between nvmet 
> and the new CDC we're working on. I'm also asking the CDC developers for 
> their help. I will let you know when I can clearly identify which 
> packets/bytes/bits are causing nvme-tcp to crash.

Thanks Martin, obviously we have an issue because no matter what the
target does, it should not trigger a host crash.


* Re: nvme-tcp bricks my computer
       [not found]                   ` <SJ0PR19MB4544A653D3A84A19A28D3984F28E9@SJ0PR19MB4544.namprd19.prod.outlook.com>
@ 2021-02-10 22:33                     ` Sagi Grimberg
       [not found]                       ` <SJ0PR19MB4544CAB0A539F84E54EC5208F2889@SJ0PR19MB4544.namprd19.prod.outlook.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-10 22:33 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


> Hi Sagi,
> 
> I was finally able to get back to the crash issue.
> 
> Using Wireshark, I compared the PDUs from the nvmet to our home-brewed 
> Central Discovery Controller (CDC). I did not see any major differences 
> in the data itself. However, there is one significant difference in the 
> way that the nvmet and CDC indicate that a command has completed 
> successfully.
> 
> The NVMe-oF spec describes two ways that a Controller can indicate to 
> the Host that a command has completed successfully. One way is to send a 
> Response Capsule with the "status" set to 0. Another way is to set the 
> SUCCESS bit to 1 in the last C2HData PDU. This approach eliminates the 
> need for a Response Capsule PDU. Ref. NVMe-oF specs, section 7.4.5.2, 
> 5th paragraph.
> 
> The CDC sets the last C2HData's SUCCESS bit to 1 instead of sending a 
> Response Capsule. nvmet sets SUCCESS=0 and sends a separate Response 
> Capsule. Could this be the cause of the crash?

I don't think so, but did the host ask for this in the connect?
The host should explicitly ask for it when connecting: with nvme-cli
one should specify disable_sqflow (this optimization is only
possible if SQ flow control is explicitly disabled).

In any event, even if the controller is misbehaving, it shouldn't crash
the host. I'll look into that.
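The two completion paths Martin describes can be sketched as follows. The flag values mirror the kernel's nvme-tcp PDU definitions (LAST_PDU in bit 2, SUCCESS in bit 3 of the C2HData flags); the helper function is a hypothetical illustration, not kernel code.

```python
# C2HData PDU flag bits, as defined for the NVMe/TCP transport
NVME_TCP_F_DATA_LAST = 1 << 2     # this PDU ends the data transfer
NVME_TCP_F_DATA_SUCCESS = 1 << 3  # command completed; no Response Capsule follows

def completes_without_response_capsule(flags: int) -> bool:
    """True if this C2HData PDU both ends the transfer and signals success,
    i.e. the controller will NOT send a separate Response Capsule."""
    return bool(flags & NVME_TCP_F_DATA_LAST) and bool(flags & NVME_TCP_F_DATA_SUCCESS)

# nvmet-style completion: SUCCESS=0, a Response Capsule follows
print(completes_without_response_capsule(NVME_TCP_F_DATA_LAST))  # False
# CDC-style completion: SUCCESS=1 on the last C2HData PDU
print(completes_without_response_capsule(NVME_TCP_F_DATA_LAST | NVME_TCP_F_DATA_SUCCESS))  # True
```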


* Re: nvme-tcp bricks my computer
       [not found]                       ` <SJ0PR19MB4544CAB0A539F84E54EC5208F2889@SJ0PR19MB4544.namprd19.prod.outlook.com>
@ 2021-02-15 21:35                         ` Sagi Grimberg
  2021-02-15 21:42                           ` Sagi Grimberg
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-15 21:35 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme

> Hi Sagi,

Hey,

> Just to give you an update...
> 
> We're still investigating the root cause of the crash.
> 
> We found a bug in our Discovery Controller related to SGL format (format 
> 0x5A vs. 0x01). When the host sends a "Set Feature" to configure AER/AEN 
> with a SGL format of 0x5A,

This is coming from:
--
static void nvme_tcp_set_sg_null(struct nvme_command *c)
{
         struct nvme_sgl_desc *sg = &c->common.dptr.sgl;

         sg->addr = 0;
         sg->length = 0;
         sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
                         NVME_SGL_FMT_TRANSPORT_A;
}
--
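For what it's worth, the 0x5A byte Martin observed falls out of this code directly: the descriptor type goes in the high nibble and the subtype in the low nibble. A sketch using the kernel's constant values:

```python
# SGL descriptor type byte: descriptor type in bits 7:4, subtype in bits 3:0
NVME_TRANSPORT_SGL_DATA_DESC = 0x5  # Transport SGL Data Block descriptor
NVME_SGL_FMT_TRANSPORT_A = 0xA      # transport-specific subtype A

sgl_type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) | NVME_SGL_FMT_TRANSPORT_A
print(hex(sgl_type))  # 0x5a
```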

> the DC responds with an R2T, which is
> obviously a bug. This does not happen when the SGL format is 0x01. We 
> believe that this R2T, because it is unexpected by the nvme-tcp module, 
> causes the module to crash.

I'm assuming this is because the R2T has a data length of 0?
set_features does not pass any data (the feature offset/value is in the SQE)...

> One of our engineers that is more familiar with kernel modules is 
> currently trying to understand how the R2T would cause nvme-tcp to crash.
> 
> I will let you know if/when I get more info.

Cool, thanks.


* Re: nvme-tcp bricks my computer
  2021-02-15 21:35                         ` Sagi Grimberg
@ 2021-02-15 21:42                           ` Sagi Grimberg
       [not found]                             ` <SJ0PR19MB45441A133999DA90186BCC11F2869@SJ0PR19MB4544.namprd19.prod.outlook.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-15 21:42 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme

>> Hi Sagi,
> 
> Hey,
> 
>> Just to give you an update...
>>
>> We're still investigating the root cause of the crash.
>>
>> We found a bug in our Discovery Controller related to SGL format 
>> (format 0x5A vs. 0x01). When the host sends a "Set Feature" to 
>> configure AER/AEN with a SGL format of 0x5A,
> 
> This is coming from:
> -- 
> static void nvme_tcp_set_sg_null(struct nvme_command *c)
> {
>          struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> 
>          sg->addr = 0;
>          sg->length = 0;
>          sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
>                          NVME_SGL_FMT_TRANSPORT_A;
> }
> -- 
> 
>   the DC responds with an R2T, which is
>> obviously a bug. This does not happen when the SGL format is 0x01. We 
>> believe that this R2T, because it is unexpected by the nvme-tcp 
>> module, causes the module to crash.
> 
> I'm assuming this is because the R2T has a data length of 0?
> set_features does not pass any data (the feature offset/value is in the SQE)...
> 
>> One of our engineers that is more familiar with kernel modules is 
>> currently trying to understand how the R2T would cause nvme-tcp to crash.
>>
>> I will let you know if/when I get more info.
> 
> Cool, thanks.

Does this make the crash go away at least?
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 69f59d2c5799..5274cc5800f9 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -568,6 +568,13 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
         req->pdu_len = le32_to_cpu(pdu->r2t_length);
         req->pdu_sent = 0;

+       if (unlikely(!req->pdu_len)) {
+               dev_err(queue->ctrl->ctrl.device,
+                       "req %d r2t len is %u, probably a bug...\n",
+                       rq->tag, req->pdu_len);
+               return -EPROTO;
+       }
+
         if (unlikely(req->data_sent + req->pdu_len > req->data_len)) {
                 dev_err(queue->ctrl->ctrl.device,
                         "req %d r2t len %u exceeded data len %u (%zu 
sent)\n",
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index ac2d9ed23cea..d82df6cca801 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
--


* Re: nvme-tcp bricks my computer
       [not found]                             ` <SJ0PR19MB45441A133999DA90186BCC11F2869@SJ0PR19MB4544.namprd19.prod.outlook.com>
@ 2021-02-17 23:26                               ` Sagi Grimberg
  0 siblings, 0 replies; 11+ messages in thread
From: Sagi Grimberg @ 2021-02-17 23:26 UTC (permalink / raw)
  To: Belanger, Martin, linux-nvme


> Hi Sagi,
> 
> Our kernel expert was able to try the patch you provided. This is his 
> comment:
> 
>     /[the patch] fixes the panic, but it’ll take the controller down so
>     it's unusable. If we want it to ignore the error, it will need to be
>     a little different./

Thanks for verifying it fixes the crash. I'm not sure we want to ignore
this error: while it's not prohibited by the spec to send a 0-length R2T
(although it probably should be), the controller is obviously
buggy and we should probably stop talking to it.

I'll send a formal patch, thanks!


end of thread, other threads:[~2021-02-17 23:27 UTC | newest]

Thread overview: 11+ messages
2021-02-01 19:04 nvme-tcp bricks my computer Belanger, Martin
2021-02-03  8:05 ` Sagi Grimberg
     [not found]   ` <SJ0PR19MB4544B86790800C6371E7F96AF2B49@SJ0PR19MB4544.namprd19.prod.outlook.com>
2021-02-03 22:36     ` Sagi Grimberg
2021-02-03 22:56       ` Sagi Grimberg
     [not found]         ` <SJ0PR19MB4544A3DF2B1B2A1EEB8D1B41F2B39@SJ0PR19MB4544.namprd19.prod.outlook.com>
2021-02-04  7:43           ` Sagi Grimberg
2021-02-05 18:49             ` Sagi Grimberg
     [not found]               ` <SJ0PR19MB45442A0B8E171B55A8AACACEF2B29@SJ0PR19MB4544.namprd19.prod.outlook.com>
2021-02-05 19:38                 ` Sagi Grimberg
     [not found]                   ` <SJ0PR19MB4544A653D3A84A19A28D3984F28E9@SJ0PR19MB4544.namprd19.prod.outlook.com>
2021-02-10 22:33                     ` Sagi Grimberg
     [not found]                       ` <SJ0PR19MB4544CAB0A539F84E54EC5208F2889@SJ0PR19MB4544.namprd19.prod.outlook.com>
2021-02-15 21:35                         ` Sagi Grimberg
2021-02-15 21:42                           ` Sagi Grimberg
     [not found]                             ` <SJ0PR19MB45441A133999DA90186BCC11F2869@SJ0PR19MB4544.namprd19.prod.outlook.com>
2021-02-17 23:26                               ` Sagi Grimberg
