* [PATCH 0/5] blk-mq: fix use-after-free on stale request
@ 2020-08-20 18:03 Ming Lei
  2020-08-20 18:03 ` [PATCH 1/5] blk-mq: define max_order for allocating rqs pages as macro Ming Lei
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Ming Lei @ 2020-08-20 18:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Hannes Reinecke, Bart Van Assche,
	John Garry, Christoph Hellwig

Hi,

We can't allocate a driver tag and update tags->rqs[tag] atomically, so a
stale request may be retrieved from tags->rqs[tag]. More seriously, the
stale request may already have been freed via updating nr_requests,
switching the elevator, or other paths.
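
For illustration, a simplified (not literal) interleaving between driver
tag allocation and the tag iterator looks like:

	CPU0: allocate driver tag            CPU1: blk_mq_queue_tag_busy_iter()
	tag = blk_mq_get_tag(...);
	                                     /* tag bit already set in bitmap */
	                                     rq = tags->rqs[tag]; /* old pointer */
	tags->rqs[tag] = rq;
	                                     /* the old rq may live in a request
	                                        pool page already freed by an
	                                        elevator switch -> use-after-free */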

This is a long-standing issue. Jianchao previously worked towards using
static_rqs[] for iterating requests [1][2]; one problem with that approach
is that it is hard to use when iterating over the whole tagset.

This patchset takes a different approach to fixing the issue: cache freed
request pool pages and only release them once all tags->rqs[] references
to these pages are gone.
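
In outline (the actual code is in patches 4 and 5), blk_mq_free_rqs()
stops calling __free_pages() directly and instead does roughly:

	spin_lock(&set->free_page_list_lock);
	list_add_tail(&page->lru, &set->free_page_list);
	spin_unlock(&set->free_page_list_lock);
	schedule_delayed_work(&set->rqs_page_shrink, SHRINK_RQS_PAGE_DELAY);

and the delayed work only releases a cached page once no entry in
tags->rqs[] points into that page any more.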

Please review and comment.

[1] https://lore.kernel.org/linux-block/1553492318-1810-1-git-send-email-jianchao.w.wang@oracle.com/
[2] https://marc.info/?t=154526200600007&r=2&w=2


Ming Lei (5):
  blk-mq: define max_order for allocating rqs pages as macro
  blk-mq: add helper of blk_mq_get_hw_queue_node
  blk-mq: add helpers for allocating/freeing pages of request pool
  blk-mq: cache freed request pool pages
  blk-mq: check and shrink freed request pool page

 block/blk-mq.c         | 236 +++++++++++++++++++++++++++++++++--------
 include/linux/blk-mq.h |   4 +
 2 files changed, 198 insertions(+), 42 deletions(-)

Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
-- 
2.25.2



* [PATCH 1/5] blk-mq: define max_order for allocating rqs pages as macro
  2020-08-20 18:03 [PATCH 0/5] blk-mq: fix use-after-free on stale request Ming Lei
@ 2020-08-20 18:03 ` Ming Lei
  2020-08-20 18:03 ` [PATCH 2/5] blk-mq: add helper of blk_mq_get_hw_queue_node Ming Lei
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2020-08-20 18:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Hannes Reinecke, Bart Van Assche,
	John Garry, Christoph Hellwig

Inside blk_mq_alloc_rqs(), 'max_order' is effectively a constant local
variable. Define it as a macro, so that it can be reused in a following
patch.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 77d885805699..f9da2d803c18 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2360,10 +2360,12 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 	return 0;
 }
 
+#define MAX_RQS_PAGE_ORDER 4
+
 int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx, unsigned int depth)
 {
-	unsigned int i, j, entries_per_page, max_order = 4;
+	unsigned int i, j, entries_per_page;
 	size_t rq_size, left;
 	int node;
 
@@ -2382,7 +2384,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	left = rq_size * depth;
 
 	for (i = 0; i < depth; ) {
-		int this_order = max_order;
+		int this_order = MAX_RQS_PAGE_ORDER;
 		struct page *page;
 		int to_do;
 		void *p;
-- 
2.25.2



* [PATCH 2/5] blk-mq: add helper of blk_mq_get_hw_queue_node
  2020-08-20 18:03 [PATCH 0/5] blk-mq: fix use-after-free on stale request Ming Lei
  2020-08-20 18:03 ` [PATCH 1/5] blk-mq: define max_order for allocating rqs pages as macro Ming Lei
@ 2020-08-20 18:03 ` Ming Lei
  2020-08-25  8:55   ` John Garry
  2020-08-20 18:03 ` [PATCH 3/5] blk-mq: add helpers for allocating/freeing pages of request pool Ming Lei
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2020-08-20 18:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Hannes Reinecke, Bart Van Assche,
	John Garry, Christoph Hellwig

Add a helper, blk_mq_get_hw_queue_node(), for retrieving a hw queue's
NUMA node.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f9da2d803c18..5019d21e7ff8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2263,6 +2263,18 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 }
 EXPORT_SYMBOL_GPL(blk_mq_submit_bio); /* only for request based dm */
 
+static int blk_mq_get_hw_queue_node(struct blk_mq_tag_set *set,
+		unsigned int hctx_idx)
+{
+	int node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT],
+			hctx_idx);
+
+	if (node == NUMA_NO_NODE)
+		node = set->numa_node;
+
+	return node;
+}
+
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx)
 {
@@ -2309,11 +2321,7 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 					unsigned int reserved_tags)
 {
 	struct blk_mq_tags *tags;
-	int node;
-
-	node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT], hctx_idx);
-	if (node == NUMA_NO_NODE)
-		node = set->numa_node;
+	int node = blk_mq_get_hw_queue_node(set, hctx_idx);
 
 	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
 				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags));
@@ -2367,11 +2375,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 {
 	unsigned int i, j, entries_per_page;
 	size_t rq_size, left;
-	int node;
-
-	node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT], hctx_idx);
-	if (node == NUMA_NO_NODE)
-		node = set->numa_node;
+	int node = blk_mq_get_hw_queue_node(set, hctx_idx);
 
 	INIT_LIST_HEAD(&tags->page_list);
 
-- 
2.25.2



* [PATCH 3/5] blk-mq: add helpers for allocating/freeing pages of request pool
  2020-08-20 18:03 [PATCH 0/5] blk-mq: fix use-after-free on stale request Ming Lei
  2020-08-20 18:03 ` [PATCH 1/5] blk-mq: define max_order for allocating rqs pages as macro Ming Lei
  2020-08-20 18:03 ` [PATCH 2/5] blk-mq: add helper of blk_mq_get_hw_queue_node Ming Lei
@ 2020-08-20 18:03 ` Ming Lei
  2020-08-20 18:03 ` [PATCH 4/5] blk-mq: cache freed request pool pages Ming Lei
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2020-08-20 18:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Hannes Reinecke, Bart Van Assche,
	John Garry, Christoph Hellwig

Add two helpers for allocating and freeing pages of the request pool.

No functional change.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 81 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 51 insertions(+), 30 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5019d21e7ff8..65f73b8db477 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2275,6 +2275,53 @@ static int blk_mq_get_hw_queue_node(struct blk_mq_tag_set *set,
 	return node;
 }
 
+static size_t order_to_size(unsigned int order)
+{
+	return (size_t)PAGE_SIZE << order;
+}
+
+static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
+		unsigned min_size)
+{
+	struct page *page;
+	unsigned this_order = order;
+
+	do {
+		page = alloc_pages_node(node,
+			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
+			this_order);
+		if (page)
+			break;
+		if (!this_order--)
+			break;
+		if (order_to_size(this_order) < min_size)
+			break;
+	} while (1);
+
+	if (!page)
+		return NULL;
+
+	page->private = this_order;
+
+	/*
+	 * Allow kmemleak to scan these pages as they contain pointers
+	 * to additional allocations like via ops->init_request().
+	 */
+	kmemleak_alloc(page_address(page), order_to_size(this_order), 1, GFP_NOIO);
+
+	return page;
+}
+
+static void blk_mq_free_rqs_page(struct page *page)
+{
+	/*
+	 * Remove kmemleak object previously allocated in
+	 * blk_mq_alloc_rqs().
+	 */
+	kmemleak_free(page_address(page));
+	__free_pages(page, page->private);
+}
+
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx)
 {
@@ -2296,12 +2343,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	while (!list_empty(&tags->page_list)) {
 		page = list_first_entry(&tags->page_list, struct page, lru);
 		list_del_init(&page->lru);
-		/*
-		 * Remove kmemleak object previously allocated in
-		 * blk_mq_alloc_rqs().
-		 */
-		kmemleak_free(page_address(page));
-		__free_pages(page, page->private);
+		blk_mq_free_rqs_page(page);
 	}
 }
 
@@ -2348,11 +2390,6 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 	return tags;
 }
 
-static size_t order_to_size(unsigned int order)
-{
-	return (size_t)PAGE_SIZE << order;
-}
-
 static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 			       unsigned int hctx_idx, int node)
 {
@@ -2396,30 +2433,14 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		while (this_order && left < order_to_size(this_order - 1))
 			this_order--;
 
-		do {
-			page = alloc_pages_node(node,
-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
-				this_order);
-			if (page)
-				break;
-			if (!this_order--)
-				break;
-			if (order_to_size(this_order) < rq_size)
-				break;
-		} while (1);
-
+		page = blk_mq_alloc_rqs_page(node, this_order, rq_size);
 		if (!page)
 			goto fail;
 
-		page->private = this_order;
+		this_order = (int)page->private;
 		list_add_tail(&page->lru, &tags->page_list);
-
 		p = page_address(page);
-		/*
-		 * Allow kmemleak to scan these pages as they contain pointers
-		 * to additional allocations like via ops->init_request().
-		 */
-		kmemleak_alloc(p, order_to_size(this_order), 1, GFP_NOIO);
+
 		entries_per_page = order_to_size(this_order) / rq_size;
 		to_do = min(entries_per_page, depth - i);
 		left -= to_do * rq_size;
-- 
2.25.2



* [PATCH 4/5] blk-mq: cache freed request pool pages
  2020-08-20 18:03 [PATCH 0/5] blk-mq: fix use-after-free on stale request Ming Lei
                   ` (2 preceding siblings ...)
  2020-08-20 18:03 ` [PATCH 3/5] blk-mq: add helpers for allocating/freeing pages of request pool Ming Lei
@ 2020-08-20 18:03 ` Ming Lei
  2020-08-20 18:03 ` [PATCH 5/5] blk-mq: check and shrink freed request pool page Ming Lei
  2020-08-20 20:30 ` [PATCH 0/5] blk-mq: fix use-after-free on stale request Bart Van Assche
  5 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2020-08-20 18:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Hannes Reinecke, Bart Van Assche,
	John Garry, Christoph Hellwig

blk_mq_queue_tag_busy_iter() and blk_mq_tagset_busy_iter() may iterate
requests via driver tags. However, driver tag allocation and updating
tags->rqs[tag] can't be done atomically, and we don't clear
tags->rqs[tag] before releasing the driver tag in the fast path. So the
two iterator functions may see a stale request via tags->rqs[tag], and
the stale request may already have been freed via
blk_mq_update_nr_requests() or an elevator switch, triggering a
use-after-free warning.

Fix this issue by caching freed request pool pages in a dedicated
per-tagset list, and always trying to allocate request pool pages from
the cached pages first.

Some memory may be wasted: the pages of at most one request pool are
kept around for each request queue when the queue's elevator is
switched from a real I/O scheduler to none.

The following patch adds a simple mechanism for reclaiming these unused
pages for request pool allocation.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 98 ++++++++++++++++++++++++++++++++++++------
 include/linux/blk-mq.h |  3 ++
 2 files changed, 87 insertions(+), 14 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 65f73b8db477..c644f5cb1549 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2280,12 +2280,58 @@ static size_t order_to_size(unsigned int order)
 	return (size_t)PAGE_SIZE << order;
 }
 
-static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
-		unsigned min_size)
+#define MAX_RQS_PAGE_ORDER 4
+
+static void blk_mq_mark_rqs_page(struct page *page, unsigned order,
+		unsigned hctx_idx)
+{
+	WARN_ON_ONCE(order > MAX_RQS_PAGE_ORDER);
+	WARN_ON_ONCE(hctx_idx > (ULONG_MAX >> fls(MAX_RQS_PAGE_ORDER)));
+
+	page->private = (hctx_idx << fls(MAX_RQS_PAGE_ORDER)) | order;
+}
+
+static unsigned blk_mq_rqs_page_order(struct page *page)
+{
+	return page->private & ((1 << fls(MAX_RQS_PAGE_ORDER)) - 1);
+}
+
+static unsigned blk_mq_rqs_page_hctx_idx(struct page *page)
+{
+	return page->private >> fls(MAX_RQS_PAGE_ORDER);
+}
+
+static struct page *blk_mq_alloc_rqs_page_from_cache(
+		struct blk_mq_tag_set *set, unsigned hctx_idx)
+{
+	struct page *page = NULL, *tmp;
+
+	spin_lock(&set->free_page_list_lock);
+	list_for_each_entry(tmp, &set->free_page_list, lru) {
+		if (blk_mq_rqs_page_hctx_idx(tmp) == hctx_idx) {
+			page = tmp;
+			break;
+		}
+	}
+	if (page)
+		list_del_init(&page->lru);
+	spin_unlock(&set->free_page_list_lock);
+
+	return page;
+}
+
+static struct page *blk_mq_alloc_rqs_page(struct blk_mq_tag_set *set,
+		unsigned hctx_idx, unsigned order, unsigned min_size)
 {
 	struct page *page;
 	unsigned this_order = order;
+	int node;
+
+	page = blk_mq_alloc_rqs_page_from_cache(set, hctx_idx);
+	if (page)
+		return page;
 
+	node = blk_mq_get_hw_queue_node(set, hctx_idx);
 	do {
 		page = alloc_pages_node(node,
 			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
@@ -2301,7 +2347,7 @@ static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
 	if (!page)
 		return NULL;
 
-	page->private = this_order;
+	blk_mq_mark_rqs_page(page, this_order, hctx_idx);
 
 	/*
 	 * Allow kmemleak to scan these pages as they contain pointers
@@ -2312,14 +2358,34 @@ static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
 	return page;
 }
 
-static void blk_mq_free_rqs_page(struct page *page)
+static void blk_mq_release_rqs_page(struct page *page)
 {
-	/*
-	 * Remove kmemleak object previously allocated in
-	 * blk_mq_alloc_rqs().
-	 */
+	/* Remove kmemleak object previously allocated in blk_mq_alloc_rqs() */
 	kmemleak_free(page_address(page));
-	__free_pages(page, page->private);
+	__free_pages(page, blk_mq_rqs_page_order(page));
+}
+
+static void blk_mq_free_rqs_page(struct blk_mq_tag_set *set, struct page *page)
+{
+	spin_lock(&set->free_page_list_lock);
+	list_add_tail(&page->lru, &set->free_page_list);
+	spin_unlock(&set->free_page_list_lock);
+}
+
+static void blk_mq_release_all_rqs_page(struct blk_mq_tag_set *set)
+{
+	struct page *page;
+	LIST_HEAD(pg_list);
+
+	spin_lock(&set->free_page_list_lock);
+	list_splice_init(&set->free_page_list, &pg_list);
+	spin_unlock(&set->free_page_list_lock);
+
+	while (!list_empty(&pg_list)) {
+		page = list_first_entry(&pg_list, struct page, lru);
+		list_del_init(&page->lru);
+		blk_mq_release_rqs_page(page);
+	}
 }
 
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
@@ -2343,7 +2409,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	while (!list_empty(&tags->page_list)) {
 		page = list_first_entry(&tags->page_list, struct page, lru);
 		list_del_init(&page->lru);
-		blk_mq_free_rqs_page(page);
+		blk_mq_free_rqs_page(set, page);
 	}
 }
 
@@ -2405,8 +2471,6 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 	return 0;
 }
 
-#define MAX_RQS_PAGE_ORDER 4
-
 int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx, unsigned int depth)
 {
@@ -2433,11 +2497,12 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		while (this_order && left < order_to_size(this_order - 1))
 			this_order--;
 
-		page = blk_mq_alloc_rqs_page(node, this_order, rq_size);
+		page = blk_mq_alloc_rqs_page(set, hctx_idx, this_order,
+				rq_size);
 		if (!page)
 			goto fail;
 
-		this_order = (int)page->private;
+		this_order = blk_mq_rqs_page_order(page);
 		list_add_tail(&page->lru, &tags->page_list);
 		p = page_address(page);
 
@@ -3460,6 +3525,9 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 	if (ret)
 		goto out_free_mq_map;
 
+	spin_lock_init(&set->free_page_list_lock);
+	INIT_LIST_HEAD(&set->free_page_list);
+
 	ret = blk_mq_alloc_map_and_requests(set);
 	if (ret)
 		goto out_free_mq_map;
@@ -3492,6 +3560,8 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
 		set->map[j].mq_map = NULL;
 	}
 
+	blk_mq_release_all_rqs_page(set);
+
 	kfree(set->tags);
 	set->tags = NULL;
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index ea3461298de5..4c2b135dbbe1 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -247,6 +247,9 @@ struct blk_mq_tag_set {
 
 	struct mutex		tag_list_lock;
 	struct list_head	tag_list;
+
+	spinlock_t		free_page_list_lock;
+	struct list_head	free_page_list;
 };
 
 /**
-- 
2.25.2



* [PATCH 5/5] blk-mq: check and shrink freed request pool page
  2020-08-20 18:03 [PATCH 0/5] blk-mq: fix use-after-free on stale request Ming Lei
                   ` (3 preceding siblings ...)
  2020-08-20 18:03 ` [PATCH 4/5] blk-mq: cache freed request pool pages Ming Lei
@ 2020-08-20 18:03 ` Ming Lei
  2020-08-20 20:30 ` [PATCH 0/5] blk-mq: fix use-after-free on stale request Bart Van Assche
  5 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2020-08-20 18:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ming Lei, Hannes Reinecke, Bart Van Assche,
	John Garry, Christoph Hellwig

Request pool pages may take up quite a bit of space, and each request
queue may hold at most one unused request pool, so memory waste can be
big when there are lots of request queues.

Schedule a delayed work item to check whether tags->rqs[] may still
refer to a freed request pool page. If no request in tags->rqs[] refers
to the freed request pool page, release the page now. Otherwise,
re-schedule the delayed work after 10 seconds to check & release the
pages again.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 55 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |  1 +
 2 files changed, 56 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index c644f5cb1549..2865920086ea 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2365,11 +2365,63 @@ static void blk_mq_release_rqs_page(struct page *page)
 	__free_pages(page, blk_mq_rqs_page_order(page));
 }
 
+#define SHRINK_RQS_PAGE_DELAY   (10 * HZ)
+
 static void blk_mq_free_rqs_page(struct blk_mq_tag_set *set, struct page *page)
 {
 	spin_lock(&set->free_page_list_lock);
 	list_add_tail(&page->lru, &set->free_page_list);
 	spin_unlock(&set->free_page_list_lock);
+
+	schedule_delayed_work(&set->rqs_page_shrink, SHRINK_RQS_PAGE_DELAY);
+}
+
+static bool blk_mq_can_shrink_rqs_page(struct blk_mq_tag_set *set,
+               struct page *pg)
+{
+	unsigned hctx_idx = blk_mq_rqs_page_hctx_idx(pg);
+	struct blk_mq_tags *tags = set->tags[hctx_idx];
+	unsigned long start = (unsigned long)page_address(pg);
+	unsigned long end = start + order_to_size(blk_mq_rqs_page_order(pg));
+	int i;
+
+	for (i = 0; i < set->queue_depth; i++) {
+		unsigned long rq_addr = (unsigned long)tags->rqs[i];
+		if (rq_addr >= start && rq_addr < end)
+			return false;
+	}
+	return true;
+}
+
+static void blk_mq_rqs_page_shrink_work(struct work_struct *work)
+{
+	struct blk_mq_tag_set *set =
+		container_of(work, struct blk_mq_tag_set, rqs_page_shrink.work);
+	LIST_HEAD(pg_list);
+	struct page *page, *tmp;
+	bool resched;
+
+	spin_lock(&set->free_page_list_lock);
+	list_splice_init(&set->free_page_list, &pg_list);
+	spin_unlock(&set->free_page_list_lock);
+
+	mutex_lock(&set->tag_list_lock);
+	list_for_each_entry_safe(page, tmp, &pg_list, lru) {
+		if (blk_mq_can_shrink_rqs_page(set, page)) {
+			list_del_init(&page->lru);
+			blk_mq_release_rqs_page(page);
+		}
+	}
+	mutex_unlock(&set->tag_list_lock);
+
+	spin_lock(&set->free_page_list_lock);
+	list_splice_init(&pg_list, &set->free_page_list);
+	resched = !list_empty(&set->free_page_list);
+	spin_unlock(&set->free_page_list_lock);
+
+	if (resched)
+		schedule_delayed_work(&set->rqs_page_shrink,
+				SHRINK_RQS_PAGE_DELAY);
 }
 
 static void blk_mq_release_all_rqs_page(struct blk_mq_tag_set *set)
@@ -2377,6 +2429,8 @@ static void blk_mq_release_all_rqs_page(struct blk_mq_tag_set *set)
 	struct page *page;
 	LIST_HEAD(pg_list);
 
+	cancel_delayed_work_sync(&set->rqs_page_shrink);
+
 	spin_lock(&set->free_page_list_lock);
 	list_splice_init(&set->free_page_list, &pg_list);
 	spin_unlock(&set->free_page_list_lock);
@@ -3527,6 +3581,7 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 
 	spin_lock_init(&set->free_page_list_lock);
 	INIT_LIST_HEAD(&set->free_page_list);
+	INIT_DELAYED_WORK(&set->rqs_page_shrink, blk_mq_rqs_page_shrink_work);
 
 	ret = blk_mq_alloc_map_and_requests(set);
 	if (ret)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 4c2b135dbbe1..b2adf99dbbef 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -250,6 +250,7 @@ struct blk_mq_tag_set {
 
 	spinlock_t		free_page_list_lock;
 	struct list_head	free_page_list;
+	struct delayed_work     rqs_page_shrink;
 };
 
 /**
-- 
2.25.2



* Re: [PATCH 0/5] blk-mq: fix use-after-free on stale request
  2020-08-20 18:03 [PATCH 0/5] blk-mq: fix use-after-free on stale request Ming Lei
                   ` (4 preceding siblings ...)
  2020-08-20 18:03 ` [PATCH 5/5] blk-mq: check and shrink freed request pool page Ming Lei
@ 2020-08-20 20:30 ` Bart Van Assche
  2020-08-21  2:49   ` Ming Lei
  5 siblings, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2020-08-20 20:30 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Hannes Reinecke, John Garry, Christoph Hellwig

On 8/20/20 11:03 AM, Ming Lei wrote:
> We can't run allocating driver tag and updating tags->rqs[tag] atomically,
> so stale request may be retrieved from tags->rqs[tag]. More seriously, the
> stale request may have been freed via updating nr_requests or switching
> elevator or other use cases.
> 
> It is one long-term issue, and Jianchao previous worked towards using
> static_rqs[] for iterating request, one problem is that it can be hard
> to use when iterating over tagset.
> 
> This patchset takes another different approach for fixing the issue: cache
> freed rqs pages and release them until all tags->rqs[] references on these
> pages are gone.

Hi Ming,

Is this the only possible solution? Would it e.g. be possible to protect the
code that iterates over all tags with rcu_read_lock() / rcu_read_unlock() and
to free pages that contain request pointers only after an RCU grace period has
expired? Would that perhaps result in a simpler solution?
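
I.e. something along these lines (just a sketch, not actual code):

	rcu_read_lock();
	/* walk tags->rqs[] as blk_mq_queue_tag_busy_iter() does today */
	rcu_read_unlock();

	/* free side: only release request pool pages after a grace period */
	synchronize_rcu();	/* or use call_rcu() on the pages */
	__free_pages(page, page->private);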

Thanks,

Bart.


* Re: [PATCH 0/5] blk-mq: fix use-after-free on stale request
  2020-08-20 20:30 ` [PATCH 0/5] blk-mq: fix use-after-free on stale request Bart Van Assche
@ 2020-08-21  2:49   ` Ming Lei
  2020-08-26 12:03     ` John Garry
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2020-08-21  2:49 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Hannes Reinecke, John Garry, Christoph Hellwig

Hello Bart,

On Thu, Aug 20, 2020 at 01:30:38PM -0700, Bart Van Assche wrote:
> On 8/20/20 11:03 AM, Ming Lei wrote:
> > We can't run allocating driver tag and updating tags->rqs[tag] atomically,
> > so stale request may be retrieved from tags->rqs[tag]. More seriously, the
> > stale request may have been freed via updating nr_requests or switching
> > elevator or other use cases.
> > 
> > It is one long-term issue, and Jianchao previous worked towards using
> > static_rqs[] for iterating request, one problem is that it can be hard
> > to use when iterating over tagset.
> > 
> > This patchset takes another different approach for fixing the issue: cache
> > freed rqs pages and release them until all tags->rqs[] references on these
> > pages are gone.
> 
> Hi Ming,
> 
> Is this the only possible solution? Would it e.g. be possible to protect the
> code that iterates over all tags with rcu_read_lock() / rcu_read_unlock() and
> to free pages that contain request pointers only after an RCU grace period has
> expired?

That can't work: tags->rqs[] is host-wide, while the request pool belongs to the
scheduler tags and is actually owned by the request queue. When the elevator is
switched on this request queue, or nr_requests is updated, the old request pool of
this queue is freed, but IOs are still being queued from other request queues in
this tagset. An elevator switch or nr_requests update on one request queue
shouldn't, and can't, affect other request queues in the same tagset.

Meanwhile the reference in tags->rqs[] may stay around for quite a while, so an
RCU grace period can't cover this case.

Also we can't simply reset the related tags->rqs[tag] somewhere, because it may
race with new driver tag allocation. Otherwise some atomic update would be
required, but that obviously introduces extra load in the fast path.

> Would that perhaps result in a simpler solution?

No, that doesn't actually work.

This patchset looks complicated, but the idea is very simple. With this
approach, we can later extend it to allocate the request pool attached to
driver tags dynamically. So far, it is always pre-allocated, and never
used for normal single-queue disks.


Thanks,
Ming



* Re: [PATCH 2/5] blk-mq: add helper of blk_mq_get_hw_queue_node
  2020-08-20 18:03 ` [PATCH 2/5] blk-mq: add helper of blk_mq_get_hw_queue_node Ming Lei
@ 2020-08-25  8:55   ` John Garry
  0 siblings, 0 replies; 13+ messages in thread
From: John Garry @ 2020-08-25  8:55 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Hannes Reinecke, Bart Van Assche, Christoph Hellwig

On 20/08/2020 19:03, Ming Lei wrote:
> Add helper of blk_mq_get_hw_queue_node for retrieve hw queue's numa
> node.
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: John Garry <john.garry@huawei.com>
> Cc: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-mq.c | 24 ++++++++++++++----------
>   1 file changed, 14 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f9da2d803c18..5019d21e7ff8 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2263,6 +2263,18 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
>   }
>   EXPORT_SYMBOL_GPL(blk_mq_submit_bio); /* only for request based dm */
>   
> +static int blk_mq_get_hw_queue_node(struct blk_mq_tag_set *set,
> +		unsigned int hctx_idx)
> +{
> +	int node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT],
> +			hctx_idx);

Hi Ming,

Did you consider whether we could consolidate all of this into 
blk_mq_hw_queue_to_node(), by passing the set there as well (since we 
always use HCTX_TYPE_DEFAULT)? Or would that just exceed the remit of 
blk_mq_hw_queue_to_node()?

I don't think it would affect the other user of 
blk_mq_hw_queue_to_node(), which is blk_mq_realloc_hw_ctxs().

But the current change looks ok also.

Thanks

> +
> +	if (node == NUMA_NO_NODE)
> +		node = set->numa_node;
> +
> +	return node;
> +}
> +
>   void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
>   		     unsigned int hctx_idx)
>   {
> @@ -2309,11 +2321,7 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
>   					unsigned int reserved_tags)
>   {
>   	struct blk_mq_tags *tags;
> -	int node;
> -
> -	node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT], hctx_idx);
> -	if (node == NUMA_NO_NODE)
> -		node = set->numa_node;
> +	int node = blk_mq_get_hw_queue_node(set, hctx_idx);
>   
>   	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
>   				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags));
> @@ -2367,11 +2375,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
>   {
>   	unsigned int i, j, entries_per_page;
>   	size_t rq_size, left;
> -	int node;
> -
> -	node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT], hctx_idx);
> -	if (node == NUMA_NO_NODE)
> -		node = set->numa_node;
> +	int node = blk_mq_get_hw_queue_node(set, hctx_idx);
>   
>   	INIT_LIST_HEAD(&tags->page_list);
>   
> 



* Re: [PATCH 0/5] blk-mq: fix use-after-free on stale request
  2020-08-21  2:49   ` Ming Lei
@ 2020-08-26 12:03     ` John Garry
  2020-08-26 12:24       ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: John Garry @ 2020-08-26 12:03 UTC (permalink / raw)
  To: Ming Lei, Bart Van Assche
  Cc: Jens Axboe, linux-block, Hannes Reinecke, Christoph Hellwig

On 21/08/2020 03:49, Ming Lei wrote:
> Hello Bart,
> 
> On Thu, Aug 20, 2020 at 01:30:38PM -0700, Bart Van Assche wrote:
>> On 8/20/20 11:03 AM, Ming Lei wrote:
>>> We can't run allocating driver tag and updating tags->rqs[tag] atomically,
>>> so stale request may be retrieved from tags->rqs[tag]. More seriously, the
>>> stale request may have been freed via updating nr_requests or switching
>>> elevator or other use cases.
>>>
>>> It is one long-term issue, and Jianchao previous worked towards using
>>> static_rqs[] for iterating request, one problem is that it can be hard
>>> to use when iterating over tagset.
>>>
>>> This patchset takes another different approach for fixing the issue: cache
>>> freed rqs pages and release them until all tags->rqs[] references on these
>>> pages are gone.
>>
>> Hi Ming,
>>
>> Is this the only possible solution? Would it e.g. be possible to protect the
>> code that iterates over all tags with rcu_read_lock() / rcu_read_unlock() and
>> to free pages that contain request pointers only after an RCU grace period has
>> expired?
> 
> That can't work, tags->rqs[] is host-wide, request pool belongs to scheduler tag
> and it is owned by request queue actually. When one elevator is switched on this
> request queue or updating nr_requests, the old request pool of this queue is freed,
> but IOs are still queued from other request queues in this tagset. Elevator switch
> or updating nr_requests on one request queue shouldn't or can't other request queues
> in the same tagset.
> 
> Meantime the reference in tags->rqs[] may stay a bit long, and RCU can't cover this
> case.
> 
> Also we can't reset the related tags->rqs[tag] simply somewhere, cause it may
> race with new driver tag allocation. 

How about iterating over all tags->rqs[] for all scheduler tags when exiting 
the scheduler, etc., and clearing any scheduler request references, like this:

cmpxchg(&hctx->tags->rqs[tag], scheduler_rq, 0);

So we atomically NULLify any tags->rqs[] entries which contain a scheduler 
request of concern, cleaning up any references.
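
Roughly along these lines (just a sketch; is_rq_in_freed_pool() is a 
made-up placeholder for checking whether 'rq' lies in the request pool 
being freed):

	static void blk_mq_clear_sched_rq_refs(struct blk_mq_tags *drv_tags)
	{
		unsigned int i;

		for (i = 0; i < drv_tags->nr_tags; i++) {
			struct request *rq = READ_ONCE(drv_tags->rqs[i]);

			if (rq && is_rq_in_freed_pool(rq))
				cmpxchg(&drv_tags->rqs[i], rq, NULL);
		}
	}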

I quickly tried it and it looks to work, but maybe not so elegant.

> Or some atomic update is required,
> but obviously extra load is introduced in fast path.

Yes, similar said on this patch:
https://lore.kernel.org/linux-block/cf524178-c497-373c-37f6-abee13eacf19@kernel.dk/

> 
>> Would that perhaps result in a simpler solution?
> 
> No, that doesn't work actually.
> 
> This patchset looks complicated, but the idea is very simple. With this
> approach, we can extend to support allocating request pool attached to
> driver tags dynamically. So far, it is always pre-allocated, and never be
> used for normal single queue disks.
> 

I'll continue to check this solution, but it seems to me that we should 
not get as far as the rq->q == hctx->queue check in bt_iter().

Thanks,
John



* Re: [PATCH 0/5] blk-mq: fix use-after-free on stale request
  2020-08-26 12:03     ` John Garry
@ 2020-08-26 12:24       ` Ming Lei
  2020-08-26 12:34         ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2020-08-26 12:24 UTC (permalink / raw)
  To: John Garry
  Cc: Bart Van Assche, Jens Axboe, linux-block, Hannes Reinecke,
	Christoph Hellwig

On Wed, Aug 26, 2020 at 01:03:37PM +0100, John Garry wrote:
> On 21/08/2020 03:49, Ming Lei wrote:
> > Hello Bart,
> > 
> > On Thu, Aug 20, 2020 at 01:30:38PM -0700, Bart Van Assche wrote:
> > > On 8/20/20 11:03 AM, Ming Lei wrote:
> > > > We can't run allocating driver tag and updating tags->rqs[tag] atomically,
> > > > so stale request may be retrieved from tags->rqs[tag]. More seriously, the
> > > > stale request may have been freed via updating nr_requests or switching
> > > > elevator or other use cases.
> > > > 
> > > > It is one long-term issue, and Jianchao previous worked towards using
> > > > static_rqs[] for iterating request, one problem is that it can be hard
> > > > to use when iterating over tagset.
> > > > 
> > > > This patchset takes another different approach for fixing the issue: cache
> > > > freed rqs pages and release them until all tags->rqs[] references on these
> > > > pages are gone.
> > > 
> > > Hi Ming,
> > > 
> > > Is this the only possible solution? Would it e.g. be possible to protect the
> > > code that iterates over all tags with rcu_read_lock() / rcu_read_unlock() and
> > > to free pages that contain request pointers only after an RCU grace period has
> > > expired?
> > 
> > That can't work, tags->rqs[] is host-wide, request pool belongs to scheduler tag
> > and it is owned by request queue actually. When one elevator is switched on this
> > request queue or updating nr_requests, the old request pool of this queue is freed,
> > but IOs are still queued from other request queues in this tagset. Elevator switch
> > or updating nr_requests on one request queue shouldn't or can't other request queues
> > in the same tagset.
> > 
> > Meantime the reference in tags->rqs[] may stay a bit long, and RCU can't cover this
> > case.
> > 
> > Also we can't reset the related tags->rqs[tag] simply somewhere, cause it may
> > race with new driver tag allocation.
> 
> How about iterate all tags->rqs[] for all scheduler tags when exiting the
> scheduler, etc, and clear any scheduler requests references, like this:
> 
> cmpxchg(&hctx->tags->rqs[tag], scheduler_rq, 0);
> 
> So we NULLify any tags->rqs[] entries which contain a scheduler request of
> concern atomically, cleaning up any references.

Looks like this approach can work, given that cmpxchg() will prevent a new
store to this address.

> 
> I quickly tried it and it looks to work, but maybe not so elegant.

I think this way is good enough.


thanks, 
Ming



* Re: [PATCH 0/5] blk-mq: fix use-after-free on stale request
  2020-08-26 12:24       ` Ming Lei
@ 2020-08-26 12:34         ` Ming Lei
  2020-08-26 12:56           ` John Garry
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2020-08-26 12:34 UTC (permalink / raw)
  To: John Garry
  Cc: Bart Van Assche, Jens Axboe, linux-block, Hannes Reinecke,
	Christoph Hellwig

On Wed, Aug 26, 2020 at 08:24:07PM +0800, Ming Lei wrote:
> On Wed, Aug 26, 2020 at 01:03:37PM +0100, John Garry wrote:
> > On 21/08/2020 03:49, Ming Lei wrote:
> > > Hello Bart,
> > > 
> > > On Thu, Aug 20, 2020 at 01:30:38PM -0700, Bart Van Assche wrote:
> > > > On 8/20/20 11:03 AM, Ming Lei wrote:
> > > > > We can't run allocating driver tag and updating tags->rqs[tag] atomically,
> > > > > so stale request may be retrieved from tags->rqs[tag]. More seriously, the
> > > > > stale request may have been freed via updating nr_requests or switching
> > > > > elevator or other use cases.
> > > > > 
> > > > > It is one long-term issue, and Jianchao previous worked towards using
> > > > > static_rqs[] for iterating request, one problem is that it can be hard
> > > > > to use when iterating over tagset.
> > > > > 
> > > > > This patchset takes another different approach for fixing the issue: cache
> > > > > freed rqs pages and release them until all tags->rqs[] references on these
> > > > > pages are gone.
> > > > 
> > > > Hi Ming,
> > > > 
> > > > Is this the only possible solution? Would it e.g. be possible to protect the
> > > > code that iterates over all tags with rcu_read_lock() / rcu_read_unlock() and
> > > > to free pages that contain request pointers only after an RCU grace period has
> > > > expired?
> > > 
> > > That can't work, tags->rqs[] is host-wide, request pool belongs to scheduler tag
> > > and it is owned by request queue actually. When one elevator is switched on this
> > > request queue or updating nr_requests, the old request pool of this queue is freed,
> > > but IOs are still queued from other request queues in this tagset. Elevator switch
> > > or updating nr_requests on one request queue shouldn't or can't other request queues
> > > in the same tagset.
> > > 
> > > Meantime the reference in tags->rqs[] may stay a bit long, and RCU can't cover this
> > > case.
> > > 
> > > Also we can't reset the related tags->rqs[tag] simply somewhere, cause it may
> > > race with new driver tag allocation.
> > 
> > How about iterate all tags->rqs[] for all scheduler tags when exiting the
> > scheduler, etc, and clear any scheduler requests references, like this:
> > 
> > cmpxchg(&hctx->tags->rqs[tag], scheduler_rq, 0);
> > 
> > So we NULLify any tags->rqs[] entries which contain a scheduler request of
> > concern atomically, cleaning up any references.
> 
> Looks this approach can work given cmpxchg() will prevent new store on
> this address.

Another process may still be reading this to-be-freed request via
blk_mq_queue_tag_busy_iter() or blk_mq_tagset_busy_iter() while the NULLify is
done and all requests of this scheduler are freed.
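
Roughly (simplified illustration, not actual code):

	CPU0: elevator exit                  CPU1: blk_mq_tagset_busy_iter()
	                                     rq = tags->rqs[tag];  /* reads rq */
	cmpxchg(&tags->rqs[tag], rq, NULL);
	free the scheduler request pool
	                                     fn(hctx, rq, ...);    /* use-after-free */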


Thanks, 
Ming



* Re: [PATCH 0/5] blk-mq: fix use-after-free on stale request
  2020-08-26 12:34         ` Ming Lei
@ 2020-08-26 12:56           ` John Garry
  0 siblings, 0 replies; 13+ messages in thread
From: John Garry @ 2020-08-26 12:56 UTC (permalink / raw)
  To: Ming Lei
  Cc: Bart Van Assche, Jens Axboe, linux-block, Hannes Reinecke,
	Christoph Hellwig

On 26/08/2020 13:34, Ming Lei wrote:
>>>> Meantime the reference in tags->rqs[] may stay a bit long, and RCU can't cover this
>>>> case.
>>>>
>>>> Also we can't reset the related tags->rqs[tag] simply somewhere, cause it may
>>>> race with new driver tag allocation.
>>> How about iterate all tags->rqs[] for all scheduler tags when exiting the
>>> scheduler, etc, and clear any scheduler requests references, like this:
>>>
>>> cmpxchg(&hctx->tags->rqs[tag], scheduler_rq, 0);
>>>
>>> So we NULLify any tags->rqs[] entries which contain a scheduler request of
>>> concern atomically, cleaning up any references.
>> Looks this approach can work given cmpxchg() will prevent new store on
>> this address.
> Another process may still be reading this to-be-freed request via
> blk_mq_queue_tag_busy_iter or blk_mq_tagset_busy_iter(), meantime NULLify is done
> and all requests of this scheduler are freed.
> 

That seems like another, deeper problem. If there is no mechanism to 
guard against this, maybe some reference, semaphore, etc. needs to be taken 
at the beginning of the iterators to temporarily block anything that 
would cause the requests to be freed.

Thanks,
John

