* [RFC v3 0/4] Introduce the bulk mode method when sending request to crypto layer
From: Baolin Wang @ 2016-06-01  4:58 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

This patchset checks whether the cipher supports bulk mode, and dm-crypt then
chooses how to send requests to the crypto layer according to the cipher mode.
In bulk mode, we can use an sg table to map the whole bio and send all
scatterlists of one bio to the crypto engine for encryption or decryption,
which can improve the hardware engine's efficiency.
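
The decision amounts to the rough sketch below; crypt_convert_one() is a
hypothetical wrapper name used only for illustration, while any_tfm(),
skcipher_is_bulk_mode(), crypt_convert_bulk_block() and crypt_convert_block()
are the helpers used or added by patches 2 and 3 (error handling trimmed):

	/* Illustrative sketch of the per-request decision, not literal code */
	static int crypt_convert_one(struct crypt_config *cc,
				     struct convert_context *ctx)
	{
		if (skcipher_is_bulk_mode(any_tfm(cc)))
			/* one request covering all scatterlists of the bio */
			return crypt_convert_bulk_block(cc, ctx, ctx->req);

		/* existing path: one request per 512-byte sector */
		return crypt_convert_block(cc, ctx, ctx->req);
	}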

Changes since v2:
 - Add one cipher user with the CRYPTO_ALG_BULK flag to support bulk mode.
 - Add one atomic variable to avoid races on the sg tables.

Changes since v1:
 - Refactor the blk_bio_map_sg() function to avoid duplicated code.
 - Move the sg table allocation into the crypt_ctr_cipher() function to avoid
   memory allocation in the I/O path.
 - Remove the crypt_sg_entry() function.
 - Other optimizations.

Baolin Wang (4):
  block: Introduce blk_bio_map_sg() to map one bio
  crypto: Introduce CRYPTO_ALG_BULK flag
  md: dm-crypt: Introduce the bulk mode method when sending request
  crypto: Add the CRYPTO_ALG_BULK flag for ecb(aes) cipher

 block/blk-merge.c         |   36 ++++++++--
 drivers/crypto/omap-aes.c |    2 +-
 drivers/md/dm-crypt.c     |  159 ++++++++++++++++++++++++++++++++++++++++++++-
 include/crypto/skcipher.h |    7 ++
 include/linux/blkdev.h    |    2 +
 include/linux/crypto.h    |    6 ++
 6 files changed, 205 insertions(+), 7 deletions(-)

-- 
1.7.9.5

* [RFC v3 1/4] block: Introduce blk_bio_map_sg() to map one bio
From: Baolin Wang @ 2016-06-01  4:58 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

dm-crypt needs to map one bio to a scatterlist to improve the hardware
engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map a single bio to scatterlists.

To avoid duplicating code in the __blk_bios_map_sg() function, add one
parameter to distinguish mapping a single bio from mapping a whole request.
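
For reference, a caller is expected to use the new helper roughly as sketched
below; the bio and sgl variables, the preallocation and the error handling are
assumptions for illustration, not part of this patch:

	/*
	 * Map all segments of @bio into a preallocated scatterlist @sgl.
	 * The caller must ensure @sgl can hold bio_segments(bio) entries;
	 * blk_bio_map_sg() marks the last used entry with sg_mark_end().
	 */
	int nents;

	nents = blk_bio_map_sg(bdev_get_queue(bio->bi_bdev), bio, sgl);
	if (nents <= 0)
		return -EINVAL;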

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 block/blk-merge.c      |   36 +++++++++++++++++++++++++++++++-----
 include/linux/blkdev.h |    2 ++
 2 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..badae44 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -376,7 +376,7 @@ new_segment:
 
 static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 			     struct scatterlist *sglist,
-			     struct scatterlist **sg)
+			     struct scatterlist **sg, bool single_bio)
 {
 	struct bio_vec bvec, bvprv = { NULL };
 	struct bvec_iter iter;
@@ -408,13 +408,39 @@ single_segment:
 		return 1;
 	}
 
-	for_each_bio(bio)
+	if (!single_bio) {
+		for_each_bio(bio)
+			bio_for_each_segment(bvec, bio, iter)
+				__blk_segment_map_sg(q, &bvec, sglist, &bvprv,
+						     sg, &nsegs, &cluster);
+	} else {
 		bio_for_each_segment(bvec, bio, iter)
-			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
-					     &nsegs, &cluster);
+			__blk_segment_map_sg(q, &bvec, sglist, &bvprv,
+					     sg, &nsegs, &cluster);
+	}
+
+	return nsegs;
+}
+
+/*
+ * Map a bio to scatterlist, return number of sg entries setup. Caller must
+ * make sure sg can hold bio segments entries.
+ */
+int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+		   struct scatterlist *sglist)
+{
+	struct scatterlist *sg = NULL;
+	int nsegs = 0;
+
+	if (bio)
+		nsegs = __blk_bios_map_sg(q, bio, sglist, &sg, true);
+
+	if (sg)
+		sg_mark_end(sg);
 
 	return nsegs;
 }
+EXPORT_SYMBOL(blk_bio_map_sg);
 
 /*
  * map a request to scatterlist, return number of sg entries setup. Caller
@@ -427,7 +453,7 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 	int nsegs = 0;
 
 	if (rq->bio)
-		nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);
+		nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg, false);
 
 	if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
 	    (blk_rq_bytes(rq) & q->dma_pad_mask)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1fd8fdf..5868062 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1013,6 +1013,8 @@ extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fu
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
 
 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *);
+extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+			  struct scatterlist *sglist);
 extern void blk_dump_rq_flags(struct request *, char *);
 extern long nr_blockdev_pages(void);
 
-- 
1.7.9.5

* [RFC v3 2/4] crypto: Introduce CRYPTO_ALG_BULK flag
From: Baolin Wang @ 2016-06-01  4:58 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

Some cipher hardware engines prefer to handle a bulk block rather than the
one-sector (512 byte) requests created by dm-crypt, because these engines can
handle the initialization vectors (IVs) by themselves within one bulk block.
This means we can increase the request size by merging requests instead of
always submitting 512 bytes, and thus increase the hardware engine's
processing speed.

So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can support
bulk mode.
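
A minimal sketch of how the flag is meant to be used is shown below; the
driver-side fragment mirrors patch 4, the query side uses the
skcipher_is_bulk_mode() helper added here, and the "ecb(aes)" allocation and
pr_info() call are only illustrative:

	/* driver side: advertise bulk support when registering the algorithm */
	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC |
		     CRYPTO_ALG_BULK,

	/* user side: query the transform before deciding how to split the I/O */
	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	if (skcipher_is_bulk_mode(tfm))
		pr_info("ecb(aes): bulk mode supported\n");

	crypto_free_skcipher(tfm);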

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 include/crypto/skcipher.h |    7 +++++++
 include/linux/crypto.h    |    6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 0f987f5..d89d29a 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -519,5 +519,12 @@ static inline void skcipher_request_set_crypt(
 	req->iv = iv;
 }
 
+static inline unsigned int skcipher_is_bulk_mode(struct crypto_skcipher *sk_tfm)
+{
+	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
+
+	return crypto_tfm_alg_bulk(tfm);
+}
+
 #endif	/* _CRYPTO_SKCIPHER_H */
 
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 6e28c89..a315487 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -63,6 +63,7 @@
 #define CRYPTO_ALG_DEAD			0x00000020
 #define CRYPTO_ALG_DYING		0x00000040
 #define CRYPTO_ALG_ASYNC		0x00000080
+#define CRYPTO_ALG_BULK			0x00000100
 
 /*
  * Set this bit if and only if the algorithm requires another algorithm of
@@ -623,6 +624,11 @@ static inline u32 crypto_tfm_alg_type(struct crypto_tfm *tfm)
 	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
 }
 
+static inline unsigned int crypto_tfm_alg_bulk(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_BULK;
+}
+
 static inline unsigned int crypto_tfm_alg_blocksize(struct crypto_tfm *tfm)
 {
 	return tfm->__crt_alg->cra_blocksize;
-- 
1.7.9.5

* [RFC v3 3/4] md: dm-crypt: Introduce the bulk mode method when sending request
From: Baolin Wang @ 2016-06-01  4:58 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

In the current dm-crypt code, it is inefficient to map only one segment
(always one sector) of a bio with a single scatterlist at a time for a
hardware crypto engine. Especially for some encryption modes (such as ecb or
xts) cooperating with the crypto engine, only one initial IV or a null IV is
needed instead of a different IV for each sector. In this situation we can use
multiple scatterlists to map the whole bio and send all scatterlists of one
bio to the crypto engine for encryption or decryption, which can improve the
hardware engine's efficiency.

With this optimization, on my test setup (BeagleBone Black board, dd testing)
using 64KB I/Os on an eMMC storage device, I saw about a 127% improvement in
throughput for encrypted writes, and about a 206% improvement for encrypted
reads. This does not apply to other modes which need a different IV for each
sector.
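
The core of the bulk path condenses to the sketch below, a simplified extract
of crypt_convert_bulk_block() from the diff that follows; the sg table
reinitialisation, the read/in-place case and the error paths are omitted:

	/* map the complete in/out bios instead of one 512-byte sector */
	nents_in  = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
				   bio_in, cc->sgt_in.sgl);
	nents_out = blk_bio_map_sg(bdev_get_queue(bio_out->bi_bdev),
				   bio_out, cc->sgt_out.sgl);

	/* one IV and one skcipher request cover the whole bio */
	skcipher_request_set_crypt(req, cc->sgt_in.sgl, cc->sgt_out.sgl,
				   bio_in->bi_iter.bi_size, iv);
	ret = crypto_skcipher_encrypt(req);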

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 drivers/md/dm-crypt.c |  159 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 158 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4f3cb35..0b1d452 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,6 +33,7 @@
 #include <linux/device-mapper.h>
 
 #define DM_MSG_PREFIX "crypt"
+#define DM_MAX_SG_LIST	512
 
 /*
  * context holding the current state of a multi-part conversion
@@ -142,6 +143,11 @@ struct crypt_config {
 	char *cipher;
 	char *cipher_string;
 
+	struct sg_table sgt_in;
+	struct sg_table sgt_out;
+	atomic_t sgt_init_done;
+	struct completion sgt_restart;
+
 	struct crypt_iv_operations *iv_gen_ops;
 	union {
 		struct iv_essiv_private essiv;
@@ -837,6 +843,141 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
 		crypto_skcipher_alignmask(any_tfm(cc)) + 1);
 }
 
+static void crypt_init_sg_table(struct scatterlist *sgl)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		if (i < DM_MAX_SG_LIST - 1 && sg_is_last(sg))
+			sg_unmark_end(sg);
+		else if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		memset(sg, 0, sizeof(struct scatterlist));
+
+		if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+}
+
+static void crypt_reinit_sg_table(struct crypt_config *cc)
+{
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return;
+
+	crypt_init_sg_table(cc->sgt_in.sgl);
+	crypt_init_sg_table(cc->sgt_out.sgl);
+
+	if (atomic_inc_and_test(&cc->sgt_init_done))
+		complete(&cc->sgt_restart);
+	atomic_set(&cc->sgt_init_done, 1);
+}
+
+static int crypt_alloc_sg_table(struct crypt_config *cc)
+{
+	unsigned int bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+	int ret = 0;
+
+	if (!bulk_mode)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_in, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_out, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_free_table;
+
+	init_completion(&cc->sgt_restart);
+	atomic_set(&cc->sgt_init_done, 1);
+	return 0;
+
+out_free_table:
+	sg_free_table(&cc->sgt_in);
+out_skip_alloc:
+	cc->sgt_in.orig_nents = 0;
+	cc->sgt_out.orig_nents = 0;
+
+	return ret;
+}
+
+static int crypt_convert_bulk_block(struct crypt_config *cc,
+				    struct convert_context *ctx,
+				    struct skcipher_request *req)
+{
+	struct bio *bio_in = ctx->bio_in;
+	struct bio *bio_out = ctx->bio_out;
+	unsigned int total_bytes = bio_in->bi_iter.bi_size;
+	unsigned int total_sg_in, total_sg_out;
+	struct scatterlist *sg_in, *sg_out;
+	struct dm_crypt_request *dmreq;
+	u8 *iv;
+	int r;
+
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return -EINVAL;
+
+	if (!atomic_dec_and_test(&cc->sgt_init_done)) {
+		wait_for_completion(&cc->sgt_restart);
+		reinit_completion(&cc->sgt_restart);
+	}
+
+	dmreq = dmreq_of_req(cc, req);
+	iv = iv_of_dmreq(cc, dmreq);
+	dmreq->iv_sector = ctx->cc_sector;
+	dmreq->ctx = ctx;
+
+	total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
+				     bio_in, cc->sgt_in.sgl);
+	if ((total_sg_in <= 0) || (total_sg_in > DM_MAX_SG_LIST)) {
+		DMERR("%s in sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_in, cc->sgt_in.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_in.bi_size -= total_bytes;
+	sg_in = cc->sgt_in.sgl;
+	sg_out = cc->sgt_in.sgl;
+
+	if (bio_data_dir(bio_in) == READ)
+		goto set_crypt;
+
+	total_sg_out = blk_bio_map_sg(bdev_get_queue(bio_out->bi_bdev),
+				      bio_out, cc->sgt_out.sgl);
+	if ((total_sg_out <= 0) || (total_sg_out > DM_MAX_SG_LIST)) {
+		DMERR("%s out sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_out, cc->sgt_out.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_out.bi_size -= total_bytes;
+	sg_out = cc->sgt_out.sgl;
+
+set_crypt:
+	if (cc->iv_gen_ops) {
+		r = cc->iv_gen_ops->generator(cc, iv, dmreq);
+		if (r < 0)
+			return r;
+	}
+
+	atomic_set(&cc->sgt_init_done, 0);
+	skcipher_request_set_crypt(req, sg_in, sg_out, total_bytes, iv);
+
+	if (bio_data_dir(ctx->bio_in) == WRITE)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
+		r = cc->iv_gen_ops->post(cc, iv, dmreq);
+
+	return r;
+}
+
 static int crypt_convert_block(struct crypt_config *cc,
 			       struct convert_context *ctx,
 			       struct skcipher_request *req)
@@ -920,6 +1061,7 @@ static void crypt_free_req(struct crypt_config *cc,
 static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
+	unsigned int bulk_mode;
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
@@ -930,7 +1072,14 @@ static int crypt_convert(struct crypt_config *cc,
 
 		atomic_inc(&ctx->cc_pending);
 
-		r = crypt_convert_block(cc, ctx, ctx->req);
+		bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+		if (!bulk_mode) {
+			r = crypt_convert_block(cc, ctx, ctx->req);
+		} else {
+			r = crypt_convert_bulk_block(cc, ctx, ctx->req);
+			if (r == -EINVAL)
+				r = crypt_convert_block(cc, ctx, ctx->req);
+		}
 
 		switch (r) {
 		/*
@@ -1081,6 +1230,7 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 	if (io->ctx.req)
 		crypt_free_req(cc, io->ctx.req, base_bio);
 
+	crypt_reinit_sg_table(cc);
 	base_bio->bi_error = error;
 	bio_endio(base_bio);
 }
@@ -1563,6 +1713,9 @@ static void crypt_dtr(struct dm_target *ti)
 	kzfree(cc->cipher);
 	kzfree(cc->cipher_string);
 
+	sg_free_table(&cc->sgt_in);
+	sg_free_table(&cc->sgt_out);
+
 	/* Must zero key material before freeing */
 	kzfree(cc);
 }
@@ -1718,6 +1871,10 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 		}
 	}
 
+	ret = crypt_alloc_sg_table(cc);
+	if (ret)
+		DMWARN("Allocate sg table for bulk mode failed");
+
 	ret = 0;
 bad:
 	kfree(cipher_api);
-- 
1.7.9.5

* [RFC v3 4/4] crypto: Add the CRYPTO_ALG_BULK flag for ecb(aes) cipher
From: Baolin Wang @ 2016-06-01  4:58 UTC
  To: axboe, agk, snitzer, dm-devel, herbert, davem
  Cc: ebiggers3, js1304, tadeusz.struk, smueller, standby24x7, shli,
	dan.j.williams, martin.petersen, sagig, kent.overstreet,
	keith.busch, tj, ming.lei, broonie, arnd, linux-crypto,
	linux-block, linux-raid, linux-kernel, baolin.wang

Since the ecb(aes) cipher does not need to handle IVs for encryption or
decryption, it can support bulk blocks when handling data. Thus this patch
adds the CRYPTO_ALG_BULK flag for the ecb(aes) cipher to improve the hardware
AES engine's efficiency.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 drivers/crypto/omap-aes.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index ce174d3..ab09429 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -804,7 +804,7 @@ static struct crypto_alg algs_ecb_cbc[] = {
 	.cra_priority		= 300,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
 				  CRYPTO_ALG_KERN_DRIVER_ONLY |
-				  CRYPTO_ALG_ASYNC,
+				  CRYPTO_ALG_ASYNC | CRYPTO_ALG_BULK,
 	.cra_blocksize		= AES_BLOCK_SIZE,
 	.cra_ctxsize		= sizeof(struct omap_aes_ctx),
 	.cra_alignmask		= 0,
-- 
1.7.9.5
