linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework
@ 2020-08-19  4:26 Alexander Duyck
  2020-08-19  4:27 ` [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19  4:26 UTC (permalink / raw)
  To: alex.shi
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, alexander.duyck, daniel.m.jordan, linux-mm,
	shakeelb, willy, hannes, tj, cgroups, akpm, richard.weiyang,
	mgorman, iamjoonsoo.kim

This patch set addresses a few minor issues I have found. It is based on
the lrunext branch of the tree at:
https://github.com/alexshi/linux.git

The first three patches address various issues I found with the patch set,
such as the fact that we were skipping non-LRU compound pages one 4K page
at a time, that test_and_set_skip had been made redundant by the LRU bit
making the setting of the skip bit exclusive per pageblock, and that we
were calling put_page while holding the LRU lock.

The last two patches are changes I have been experimenting with. Basically
they try to reduce the number of times the LRU lock has to be released and
reacquired, either by batching LRU work together or by deferring the
freeing/returning of pages to the LRU in the case of move_pages_to_lru. I
am still working on generating data, but for the fourth patch I have seen
an improvement of about 5% on the will-it-scale/page_fault2 test with THP
enabled by default. That is just preliminary data, however, and I still
have a number of tests left to run.

---

Alexander Duyck (5):
      mm: Identify compound pages sooner in isolate_migratepages_block
      mm: Drop use of test_and_set_skip in favor of just setting skip
      mm: Add explicit page decrement in exception path for isolate_lru_pages
      mm: Split release_pages work into 3 passes
      mm: Split move_pages_to_lru into 3 separate passes


 mm/compaction.c |   84 +++++++++++++++---------------------------
 mm/swap.c       |  109 ++++++++++++++++++++++++++++++++++---------------------
 mm/vmscan.c     |   77 +++++++++++++++++++++++----------------
 3 files changed, 142 insertions(+), 128 deletions(-)

--

* [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block
  2020-08-19  4:26 [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework Alexander Duyck
@ 2020-08-19  4:27 ` Alexander Duyck
  2020-08-19  7:48   ` Alex Shi
  2020-08-19 11:43   ` Matthew Wilcox
  2020-08-19  4:27 ` [RFC PATCH v2 2/5] mm: Drop use of test_and_set_skip in favor of just setting skip Alexander Duyck
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19  4:27 UTC (permalink / raw)
  To: alex.shi
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, alexander.duyck, daniel.m.jordan, linux-mm,
	shakeelb, willy, hannes, tj, cgroups, akpm, richard.weiyang,
	mgorman, iamjoonsoo.kim

From: Alexander Duyck <alexander.h.duyck@linux.intel.com>

Since we are holding a reference to the page much sooner in
isolate_migratepages_block we can move the PageCompound check out of the
LRU locked section and instead place it just after get_page_unless_zero. By
doing this we allow anything that might trigger a failure to trigger it for
the compound page rather than the order 0 page, and as a result we should
be able to process the pageblock faster.

In addition by testing for PageCompound sooner we can avoid having the LRU
flag cleared and then reset in the exception case. As a result this should
prevent possible races where another thread might be attempting to pull the
LRU pages from the list.
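
As a concrete example (my own numbers, not from the patch): for an order-9
THP whose head page sits at pfn 0x1200, the new check does

	low_pfn += (1UL << 9) - 1;	/* 0x1200 -> 0x13ff */

and the scan loop's own low_pfn++ then advances to 0x1400, the first pfn
past the compound page, so the 511 tail pages are skipped without the LRU
flag ever being cleared and reset.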

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/compaction.c |   33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index d3f87f759773..88c7b950f676 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -984,6 +984,24 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (unlikely(!get_page_unless_zero(page)))
 			goto isolate_fail;
 
+		/*
+		 * Page is compound. We know the order before we know if it is
+		 * on the LRU so we cannot assume it is THP. However since the
+		 * page will have the LRU validated shortly we can use the value
+		 * to skip over this page for now or validate the LRU is set and
+		 * then isolate the entire compound page if we are isolating to
+		 * generate a CMA page.
+		 */
+		if (PageCompound(page)) {
+			const unsigned int order = compound_order(page);
+
+			if (likely(order < MAX_ORDER))
+				low_pfn += (1UL << order) - 1;
+
+			if (!cc->alloc_contig)
+				goto isolate_fail_put;
+		}
+
 		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
 			goto isolate_fail_put;
 
@@ -1009,23 +1027,8 @@ static bool too_many_isolated(pg_data_t *pgdat)
 				if (test_and_set_skip(cc, page, low_pfn))
 					goto isolate_abort;
 			}
-
-			/*
-			 * Page become compound since the non-locked check,
-			 * and it's on LRU. It can only be a THP so the order
-			 * is safe to read and it's 0 for tail pages.
-			 */
-			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
-				low_pfn += compound_nr(page) - 1;
-				SetPageLRU(page);
-				goto isolate_fail_put;
-			}
 		}
 
-		/* The whole page is taken off the LRU; skip the tail pages. */
-		if (PageCompound(page))
-			low_pfn += compound_nr(page) - 1;
-
 		/* Successfully isolated */
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		mod_node_page_state(page_pgdat(page),


* [RFC PATCH v2 2/5] mm: Drop use of test_and_set_skip in favor of just setting skip
  2020-08-19  4:26 [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework Alexander Duyck
  2020-08-19  4:27 ` [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
@ 2020-08-19  4:27 ` Alexander Duyck
  2020-08-19  7:50   ` Alex Shi
  2020-08-19  4:27 ` [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages Alexander Duyck
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19  4:27 UTC (permalink / raw)
  To: alex.shi
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, alexander.duyck, daniel.m.jordan, linux-mm,
	shakeelb, willy, hannes, tj, cgroups, akpm, richard.weiyang,
	mgorman, iamjoonsoo.kim

From: Alexander Duyck <alexander.h.duyck@linux.intel.com>

The only user of test_and_set_skip was isolate_migratepages_block, and it
was calling it after a call that had already tested and cleared the LRU
flag. As such it really didn't need to be behind the LRU lock anymore, and
it was no longer fulfilling its purpose.

Since the skip flag can only be tested and set if we were able to obtain
the LRU bit for the first page in the pageblock, test_and_set_skip has
become redundant: the LRU flag is now what limits the operation to a single
thread, so there is no need for an atomic test_and_set operation (sketched
below).

With that being the case we can simply drop test_and_set_skip and instead
directly call set_pageblock_skip if the page we are working on is the
valid_page at the start of the pageblock. Any other threads that enter this
pageblock should then see the skip bit set on the first valid page in the
pageblock.
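
To spell out the exclusivity argument (my summary of the ordering, not
code from the patch):

	CPU A                           CPU B
	TestClearPageLRU(first page)    TestClearPageLRU(first page)
	  succeeds                        fails -> isolate_fail
	takes the lruvec lock
	page == valid_page, so
	  set_pageblock_skip(page)

Only one CPU can clear the LRU flag on the first valid page of the
pageblock, so at most one CPU reaches set_pageblock_skip for it, and no
atomic test_and_set is needed.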

Since we have dropped the late abort case we can also drop the code that
was clearing the LRU flag and calling put_page, as the abort case no
longer holds a reference to a page.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/compaction.c |   53 +++++++++++++----------------------------------------
 1 file changed, 13 insertions(+), 40 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 88c7b950f676..f986c67e83cc 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -399,29 +399,6 @@ void reset_isolation_suitable(pg_data_t *pgdat)
 	}
 }
 
-/*
- * Sets the pageblock skip bit if it was clear. Note that this is a hint as
- * locks are not required for read/writers. Returns true if it was already set.
- */
-static bool test_and_set_skip(struct compact_control *cc, struct page *page,
-							unsigned long pfn)
-{
-	bool skip;
-
-	/* Do no update if skip hint is being ignored */
-	if (cc->ignore_skip_hint)
-		return false;
-
-	if (!IS_ALIGNED(pfn, pageblock_nr_pages))
-		return false;
-
-	skip = get_pageblock_skip(page);
-	if (!skip && !cc->no_set_skip_hint)
-		set_pageblock_skip(page);
-
-	return skip;
-}
-
 static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
 {
 	struct zone *zone = cc->zone;
@@ -480,12 +457,6 @@ static inline void update_pageblock_skip(struct compact_control *cc,
 static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
 {
 }
-
-static bool test_and_set_skip(struct compact_control *cc, struct page *page,
-							unsigned long pfn)
-{
-	return false;
-}
 #endif /* CONFIG_COMPACTION */
 
 /*
@@ -895,7 +866,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (!valid_page && IS_ALIGNED(low_pfn, pageblock_nr_pages)) {
 			if (!cc->ignore_skip_hint && get_pageblock_skip(page)) {
 				low_pfn = end_pfn;
-				page = NULL;
 				goto isolate_abort;
 			}
 			valid_page = page;
@@ -1021,11 +991,20 @@ static bool too_many_isolated(pg_data_t *pgdat)
 
 			lruvec_memcg_debug(lruvec, page);
 
-			/* Try get exclusive access under lock */
-			if (!skip_updated) {
+			/*
+			 * Indicate that we want exclusive access to the
+			 * rest of the pageblock.
+			 *
+			 * The LRU flag prevents simultaneous access to the
+			 * first PFN, and the LRU lock helps to prevent
+			 * simultaneous update of multiple pageblocks shared
+			 * in the same bitmap.
+			 */
+			if (page == valid_page) {
+				if (!cc->ignore_skip_hint &&
+				    !cc->no_set_skip_hint)
+					set_pageblock_skip(page);
 				skip_updated = true;
-				if (test_and_set_skip(cc, page, low_pfn))
-					goto isolate_abort;
 			}
 		}
 
@@ -1098,15 +1077,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;
 
-	page = NULL;
-
 isolate_abort:
 	if (lruvec)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
-	if (page) {
-		SetPageLRU(page);
-		put_page(page);
-	}
 
 	/*
 	 * Updated the cached scanner pfn once the pageblock has been scanned


* [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages
  2020-08-19  4:26 [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework Alexander Duyck
  2020-08-19  4:27 ` [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
  2020-08-19  4:27 ` [RFC PATCH v2 2/5] mm: Drop use of test_and_set_skip in favor of just setting skip Alexander Duyck
@ 2020-08-19  4:27 ` Alexander Duyck
  2020-08-19  7:50   ` Alex Shi
  2020-08-19  4:27 ` [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes Alexander Duyck
  2020-08-19  4:27 ` [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes Alexander Duyck
  4 siblings, 1 reply; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19  4:27 UTC (permalink / raw)
  To: alex.shi
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, alexander.duyck, daniel.m.jordan, linux-mm,
	shakeelb, willy, hannes, tj, cgroups, akpm, richard.weiyang,
	mgorman, iamjoonsoo.kim

From: Alexander Duyck <alexander.h.duyck@linux.intel.com>

In isolate_lru_pages we have an exception path where if we call
get_page_unless_zero and that succeeds, but TestClearPageLRU fails, we
call put_page. Normally this would be problematic, but due to the way that
the calls are ordered and the fact that we are holding the LRU lock, we
know that the caller must be holding another reference for the page. Given
that, we can replace the put_page with a call to put_page_testzero
contained within a WARN_ON. By doing this we should see if we ever leak a
page as a result of the reference count somehow hitting zero when it
shouldn't, and can avoid the overhead and confusion of using the full
put_page call.
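
For reference, this is roughly what the two calls boil down to (simplified
from include/linux/mm.h; I have elided the devmap-managed special case in
put_page):

	static inline int put_page_testzero(struct page *page)
	{
		VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
		return page_ref_dec_and_test(page);
	}

	static inline void put_page(struct page *page)
	{
		page = compound_head(page);

		if (put_page_testzero(page))
			__put_page(page);	/* full freeing path */
	}

Since the other thread must hold a reference, put_page_testzero should
always return false here, so WARN_ON(put_page_testzero(page)) is just a
reference count decrement in the normal case and a loud warning if that
assumption is ever violated.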

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/vmscan.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5bc0c2322043..3ebe3f9b653b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1688,10 +1688,13 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 			if (!TestClearPageLRU(page)) {
 				/*
-				 * This page may in other isolation path,
-				 * but we still hold lru_lock.
+				 * This page is being isolated in another
+				 * thread, but we still hold lru_lock. The
+				 * other thread must be holding a reference
+				 * to the page so this should never hit a
+				 * reference count of 0.
 				 */
-				put_page(page);
+				WARN_ON(put_page_testzero(page));
 				goto busy;
 			}
 


* [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
  2020-08-19  4:26 [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework Alexander Duyck
                   ` (2 preceding siblings ...)
  2020-08-19  4:27 ` [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages Alexander Duyck
@ 2020-08-19  4:27 ` Alexander Duyck
  2020-08-19  7:53   ` Alex Shi
  2020-08-19  4:27 ` [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes Alexander Duyck
  4 siblings, 1 reply; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19  4:27 UTC (permalink / raw)
  To: alex.shi
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, alexander.duyck, daniel.m.jordan, linux-mm,
	shakeelb, willy, hannes, tj, cgroups, akpm, richard.weiyang,
	mgorman, iamjoonsoo.kim

From: Alexander Duyck <alexander.h.duyck@linux.intel.com>

The release_pages function has a number of paths that end up with the
LRU lock having to be released and reacquired. One such example is the
freeing of THP pages, as it requires releasing the LRU lock so that it can
be potentially reacquired by __put_compound_page.

In order to avoid that we can split the work into 3 passes. The first,
run without the LRU lock, goes through and sorts out the pages that are
not on the LRU, so they can be freed immediately, from those that are. The
second pass then removes those pages from the LRU in batches as large as a
pagevec can hold before releasing the LRU lock. Once the pages have been
removed from the LRU we can then proceed to free the remaining pages
without needing to worry about whether they are on the LRU any further.

The general idea is to avoid bouncing the LRU lock between pages and to
hopefully aggregate the lock for up to the full page vector worth of pages.
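
To put rough numbers on it (my arithmetic, assuming all of the pages share
a single lruvec): releasing 45 LRU THPs previously meant 45 LRU lock
acquire/release cycles, one per page via __put_compound_page. With this
patch it becomes 3, one per full pagevec of 15 pages, with the zone lock
still taken per page when the compound pages are finally freed.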

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/swap.c |  109 +++++++++++++++++++++++++++++++++++++------------------------
 1 file changed, 67 insertions(+), 42 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index fe53449fa1b8..b405f81b2c60 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -795,6 +795,54 @@ void lru_add_drain_all(void)
 }
 #endif
 
+static void __release_page(struct page *page, struct list_head *pages_to_free)
+{
+	if (PageCompound(page)) {
+		__put_compound_page(page);
+	} else {
+		/* Clear Active bit in case of parallel mark_page_accessed */
+		__ClearPageActive(page);
+		__ClearPageWaiters(page);
+
+		list_add(&page->lru, pages_to_free);
+	}
+}
+
+static void __release_lru_pages(struct pagevec *pvec,
+				struct list_head *pages_to_free)
+{
+	struct lruvec *lruvec = NULL;
+	unsigned long flags = 0;
+	int i;
+
+	/*
+	 * The pagevec at this point should contain a set of pages with
+	 * their reference count at 0 and the LRU flag set. We will now
+	 * need to pull the pages from their LRU lists.
+	 *
+	 * We walk the list backwards here since that way we are starting at
+	 * the pages that should be warmest in the cache.
+	 */
+	for (i = pagevec_count(pvec); i--;) {
+		struct page *page = pvec->pages[i];
+
+		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
+		VM_BUG_ON_PAGE(!PageLRU(page), page);
+		__ClearPageLRU(page);
+		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+	}
+
+	unlock_page_lruvec_irqrestore(lruvec, flags);
+
+	/*
+	 * A batch of pages are no longer on the LRU list. Go through and
+	 * start the final process of returning the deferred pages to their
+	 * appropriate freelists.
+	 */
+	for (i = pagevec_count(pvec); i--;)
+		__release_page(pvec->pages[i], pages_to_free);
+}
+
 /**
  * release_pages - batched put_page()
  * @pages: array of pages to release
@@ -806,32 +854,24 @@ void lru_add_drain_all(void)
 void release_pages(struct page **pages, int nr)
 {
 	int i;
+	struct pagevec pvec;
 	LIST_HEAD(pages_to_free);
-	struct lruvec *lruvec = NULL;
-	unsigned long flags;
-	unsigned int lock_batch;
 
+	pagevec_init(&pvec);
+
+	/*
+	 * We need to first walk through the list cleaning up the low hanging
+	 * fruit and clearing those pages that either cannot be freed or that
+	 * are non-LRU. We will store the LRU pages in a pagevec so that we
+	 * can get to them in the next pass.
+	 */
 	for (i = 0; i < nr; i++) {
 		struct page *page = pages[i];
 
-		/*
-		 * Make sure the IRQ-safe lock-holding time does not get
-		 * excessive with a continuous string of pages from the
-		 * same lruvec. The lock is held only if lruvec != NULL.
-		 */
-		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-			unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = NULL;
-		}
-
 		if (is_huge_zero_page(page))
 			continue;
 
 		if (is_zone_device_page(page)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
 			/*
 			 * ZONE_DEVICE pages that return 'false' from
 			 * put_devmap_managed_page() do not require special
@@ -848,36 +888,21 @@ void release_pages(struct page **pages, int nr)
 		if (!put_page_testzero(page))
 			continue;
 
-		if (PageCompound(page)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			__put_compound_page(page);
+		if (!PageLRU(page)) {
+			__release_page(page, &pages_to_free);
 			continue;
 		}
 
-		if (PageLRU(page)) {
-			struct lruvec *prev_lruvec = lruvec;
-
-			lruvec = relock_page_lruvec_irqsave(page, lruvec,
-									&flags);
-			if (prev_lruvec != lruvec)
-				lock_batch = 0;
-
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			__ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		/* record page so we can get it in the next pass */
+		if (!pagevec_add(&pvec, page)) {
+			__release_lru_pages(&pvec, &pages_to_free);
+			pagevec_reinit(&pvec);
 		}
-
-		/* Clear Active bit in case of parallel mark_page_accessed */
-		__ClearPageActive(page);
-		__ClearPageWaiters(page);
-
-		list_add(&page->lru, &pages_to_free);
 	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+
+	/* flush any remaining LRU pages that need to be processed */
+	if (pagevec_count(&pvec))
+		__release_lru_pages(&pvec, &pages_to_free);
 
 	mem_cgroup_uncharge_list(&pages_to_free);
 	free_unref_page_list(&pages_to_free);


* [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
  2020-08-19  4:26 [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework Alexander Duyck
                   ` (3 preceding siblings ...)
  2020-08-19  4:27 ` [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes Alexander Duyck
@ 2020-08-19  4:27 ` Alexander Duyck
  2020-08-19  7:56   ` Alex Shi
  4 siblings, 1 reply; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19  4:27 UTC (permalink / raw)
  To: alex.shi
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, alexander.duyck, daniel.m.jordan, linux-mm,
	shakeelb, willy, hannes, tj, cgroups, akpm, richard.weiyang,
	mgorman, iamjoonsoo.kim

From: Alexander Duyck <alexander.h.duyck@linux.intel.com>

The current code for move_pages_to_lru releases the LRU lock every time
it encounters an unevictable page or a compound page that must be freed.
This results in a fair amount of code bulk because the lruvec has to be
looked up again every time the lock is reacquired.

Instead of doing this I believe we can break the code up into 3 passes.
The first pass will identify the pages we can move to the LRU and move
those. In addition it will sort the list, leaving the unevictable pages in
the list and moving those pages that have dropped to a reference count of
0 to pages_to_free. The second pass will return the unevictable pages to
the LRU. The final pass will free any compound pages we have in the
pages_to_free list before we merge it back with the original list and
return from the function.

The advantage of doing it this way is that we only have to release the
lock between pass 1 and 2, and then reacquire it after pass 3, once we
have merged the pages_to_free back into the original list. As such we only
have to release the lock at most once in an entire call instead of having
to test whether we need to relock with each page.
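
For example (my own counting): a list holding 2 unevictable pages and 1
compound page that dropped to a reference count of 0 previously meant
dropping and retaking the lock 3 times in the middle of the walk; with
this patch the lock is dropped once after pass 1 and retaken once after
pass 3 no matter how many such pages are encountered.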

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 mm/vmscan.c |   68 ++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 39 insertions(+), 29 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ebe3f9b653b..6a2bdbc1a9eb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1850,22 +1850,21 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 {
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
-	struct page *page;
-	struct lruvec *orig_lruvec = lruvec;
+	struct page *page, *next;
 	enum lru_list lru;
 
-	while (!list_empty(list)) {
-		page = lru_to_page(list);
+	list_for_each_entry_safe(page, next, list, lru) {
 		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
-		if (unlikely(!page_evictable(page))) {
-			if (lruvec) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				lruvec = NULL;
-			}
-			putback_lru_page(page);
+
+		/*
+		 * if page is unevictable leave it on the list to be returned
+		 * to the LRU after we have finished processing the other
+		 * entries in the list.
+		 */
+		if (unlikely(!page_evictable(page)))
 			continue;
-		}
+
+		list_del(&page->lru);
 
 		/*
 		 * The SetPageLRU needs to be kept here for list intergrity.
@@ -1878,20 +1877,14 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		 *     list_add(&page->lru,)
 		 *                                        list_add(&page->lru,)
 		 */
-		lruvec = relock_page_lruvec_irq(page, lruvec);
 		SetPageLRU(page);
 
 		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
 
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				lruvec = NULL;
-				destroy_compound_page(page);
-			} else
-				list_add(&page->lru, &pages_to_free);
-
+			/* defer freeing until we can release lru_lock */
+			list_add(&page->lru, &pages_to_free);
 			continue;
 		}
 
@@ -1904,16 +1897,33 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		if (PageActive(page))
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
-	if (orig_lruvec != lruvec) {
-		if (lruvec)
-			spin_unlock_irq(&lruvec->lru_lock);
-		spin_lock_irq(&orig_lruvec->lru_lock);
-	}
 
-	/*
-	 * To save our caller's stack, now use input list for pages to free.
-	 */
-	list_splice(&pages_to_free, list);
+	if (unlikely(!list_empty(list) || !list_empty(&pages_to_free))) {
+		spin_unlock_irq(&lruvec->lru_lock);
+
+		/* return any unevictable pages to the LRU list */
+		while (!list_empty(list)) {
+			page = lru_to_page(list);
+			list_del(&page->lru);
+			putback_lru_page(page);
+		}
+
+		/*
+		 * To save our caller's stack use input
+		 * list for pages to free.
+		 */
+		list_splice(&pages_to_free, list);
+
+		/* free any compound pages we have in the list */
+		list_for_each_entry_safe(page, next, list, lru) {
+			if (likely(!PageCompound(page)))
+				continue;
+			list_del(&page->lru);
+			destroy_compound_page(page);
+		}
+
+		spin_lock_irq(&lruvec->lru_lock);
+	}
 
 	return nr_moved;
 }


* Re: [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block
  2020-08-19  4:27 ` [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
@ 2020-08-19  7:48   ` Alex Shi
  2020-08-19 11:43   ` Matthew Wilcox
  1 sibling, 0 replies; 20+ messages in thread
From: Alex Shi @ 2020-08-19  7:48 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, daniel.m.jordan, linux-mm, shakeelb, willy, hannes,
	tj, cgroups, akpm, richard.weiyang, mgorman, iamjoonsoo.kim



On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> In addition by testing for PageCompound sooner we can avoid having the LRU
> flag cleared and then reset in the exception case. As a result this should
> prevent possible races where another thread might be attempting to pull the
> LRU pages from the list.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>

* Re: [RFC PATCH v2 2/5] mm: Drop use of test_and_set_skip in favor of just setting skip
  2020-08-19  4:27 ` [RFC PATCH v2 2/5] mm: Drop use of test_and_set_skip in favor of just setting skip Alexander Duyck
@ 2020-08-19  7:50   ` Alex Shi
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Shi @ 2020-08-19  7:50 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, daniel.m.jordan, linux-mm, shakeelb, willy, hannes,
	tj, cgroups, akpm, richard.weiyang, mgorman, iamjoonsoo.kim



On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>

* Re: [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages
  2020-08-19  4:27 ` [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages Alexander Duyck
@ 2020-08-19  7:50   ` Alex Shi
  2020-08-19 14:52     ` Alexander Duyck
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Shi @ 2020-08-19  7:50 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, daniel.m.jordan, linux-mm, shakeelb, willy, hannes,
	tj, cgroups, akpm, richard.weiyang, mgorman, iamjoonsoo.kim



On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> 
> In isolate_lru_pages we have an exception path where if we call
> get_page_unless_zero and that succeeds, but TestClearPageLRU fails, we
> call put_page. Normally this would be problematic, but due to the way that
> the calls are ordered and the fact that we are holding the LRU lock, we
> know that the caller must be holding another reference for the page. Given
> that, we can replace the put_page with a call to put_page_testzero
> contained within a WARN_ON. By doing this we should see if we ever leak a
> page as a result of the reference count somehow hitting zero when it
> shouldn't, and can avoid the overhead and confusion of using the full
> put_page call.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> ---
>  mm/vmscan.c |    9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 5bc0c2322043..3ebe3f9b653b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1688,10 +1688,13 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  
>  			if (!TestClearPageLRU(page)) {
>  				/*
> -				 * This page may in other isolation path,
> -				 * but we still hold lru_lock.
> +				 * This page is being isolated in another
> +				 * thread, but we still hold lru_lock. The
> +				 * other thread must be holding a reference
> +				 * to the page so this should never hit a
> +				 * reference count of 0.
>  				 */
> -				put_page(page);
> +				WARN_ON(put_page_testzero(page));

It seems WARN_ON is always enabled.

Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>

>  				goto busy;
>  			}
>  
> 

* Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
  2020-08-19  4:27 ` [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes Alexander Duyck
@ 2020-08-19  7:53   ` Alex Shi
  2020-08-19 14:57     ` Alexander Duyck
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Shi @ 2020-08-19  7:53 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, daniel.m.jordan, linux-mm, shakeelb, willy, hannes,
	tj, cgroups, akpm, richard.weiyang, mgorman, iamjoonsoo.kim



On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> 
> The release_pages function has a number of paths that end up with the
> LRU lock having to be released and reacquired. One such example is the
> freeing of THP pages, as it requires releasing the LRU lock so that it can
> be potentially reacquired by __put_compound_page.
>
> In order to avoid that we can split the work into 3 passes. The first,
> run without the LRU lock, goes through and sorts out the pages that are
> not on the LRU, so they can be freed immediately, from those that are. The
> second pass then removes those pages from the LRU in batches as large as a
> pagevec can hold before releasing the LRU lock. Once the pages have been
> removed from the LRU we can then proceed to free the remaining pages
> without needing to worry about whether they are on the LRU any further.
>
> The general idea is to avoid bouncing the LRU lock between pages and to
> hopefully aggregate the lock for up to the full page vector worth of pages.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> ---
>  mm/swap.c |  109 +++++++++++++++++++++++++++++++++++++------------------------
>  1 file changed, 67 insertions(+), 42 deletions(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index fe53449fa1b8..b405f81b2c60 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -795,6 +795,54 @@ void lru_add_drain_all(void)
>  }
>  #endif
>  
> +static void __release_page(struct page *page, struct list_head *pages_to_free)
> +{
> +	if (PageCompound(page)) {
> +		__put_compound_page(page);
> +	} else {
> +		/* Clear Active bit in case of parallel mark_page_accessed */
> +		__ClearPageActive(page);
> +		__ClearPageWaiters(page);
> +
> +		list_add(&page->lru, pages_to_free);
> +	}
> +}
> +
> +static void __release_lru_pages(struct pagevec *pvec,
> +				struct list_head *pages_to_free)
> +{
> +	struct lruvec *lruvec = NULL;
> +	unsigned long flags = 0;
> +	int i;
> +
> +	/*
> +	 * The pagevec at this point should contain a set of pages with
> +	 * their reference count at 0 and the LRU flag set. We will now
> +	 * need to pull the pages from their LRU lists.
> +	 *
> +	 * We walk the list backwards here since that way we are starting at
> +	 * the pages that should be warmest in the cache.
> +	 */
> +	for (i = pagevec_count(pvec); i--;) {
> +		struct page *page = pvec->pages[i];
> +
> +		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);

The lock bouncing is better with this patch. Would you like to go further,
e.g. using add_lruvecs to reduce the bouncing even more?

Thanks
Alex

> +		VM_BUG_ON_PAGE(!PageLRU(page), page);
> +		__ClearPageLRU(page);
> +		del_page_from_lru_list(page, lruvec, page_off_lru(page));
> +	}
> +
> +	unlock_page_lruvec_irqrestore(lruvec, flags);
> +
> +	/*
> +	 * A batch of pages are no longer on the LRU list. Go through and
> +	 * start the final process of returning the deferred pages to their
> +	 * appropriate freelists.
> +	 */
> +	for (i = pagevec_count(pvec); i--;)
> +		__release_page(pvec->pages[i], pages_to_free);
> +}
> +
>  /**
>   * release_pages - batched put_page()
>   * @pages: array of pages to release
> @@ -806,32 +854,24 @@ void lru_add_drain_all(void)
>  void release_pages(struct page **pages, int nr)
>  {
>  	int i;
> +	struct pagevec pvec;
>  	LIST_HEAD(pages_to_free);
> -	struct lruvec *lruvec = NULL;
> -	unsigned long flags;
> -	unsigned int lock_batch;
>  
> +	pagevec_init(&pvec);
> +
> +	/*
> +	 * We need to first walk through the list cleaning up the low hanging
> +	 * fruit and clearing those pages that either cannot be freed or that
> +	 * are non-LRU. We will store the LRU pages in a pagevec so that we
> +	 * can get to them in the next pass.
> +	 */
>  	for (i = 0; i < nr; i++) {
>  		struct page *page = pages[i];
>  
> -		/*
> -		 * Make sure the IRQ-safe lock-holding time does not get
> -		 * excessive with a continuous string of pages from the
> -		 * same lruvec. The lock is held only if lruvec != NULL.
> -		 */
> -		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
> -			unlock_page_lruvec_irqrestore(lruvec, flags);
> -			lruvec = NULL;
> -		}
> -
>  		if (is_huge_zero_page(page))
>  			continue;
>  
>  		if (is_zone_device_page(page)) {
> -			if (lruvec) {
> -				unlock_page_lruvec_irqrestore(lruvec, flags);
> -				lruvec = NULL;
> -			}
>  			/*
>  			 * ZONE_DEVICE pages that return 'false' from
>  			 * put_devmap_managed_page() do not require special
> @@ -848,36 +888,21 @@ void release_pages(struct page **pages, int nr)
>  		if (!put_page_testzero(page))
>  			continue;
>  
> -		if (PageCompound(page)) {
> -			if (lruvec) {
> -				unlock_page_lruvec_irqrestore(lruvec, flags);
> -				lruvec = NULL;
> -			}
> -			__put_compound_page(page);
> +		if (!PageLRU(page)) {
> +			__release_page(page, &pages_to_free);
>  			continue;
>  		}
>  
> -		if (PageLRU(page)) {
> -			struct lruvec *prev_lruvec = lruvec;
> -
> -			lruvec = relock_page_lruvec_irqsave(page, lruvec,
> -									&flags);
> -			if (prev_lruvec != lruvec)
> -				lock_batch = 0;
> -
> -			VM_BUG_ON_PAGE(!PageLRU(page), page);
> -			__ClearPageLRU(page);
> -			del_page_from_lru_list(page, lruvec, page_off_lru(page));
> +		/* record page so we can get it in the next pass */
> +		if (!pagevec_add(&pvec, page)) {
> +			__release_lru_pages(&pvec, &pages_to_free);
> +			pagevec_reinit(&pvec);
>  		}
> -
> -		/* Clear Active bit in case of parallel mark_page_accessed */
> -		__ClearPageActive(page);
> -		__ClearPageWaiters(page);
> -
> -		list_add(&page->lru, &pages_to_free);
>  	}
> -	if (lruvec)
> -		unlock_page_lruvec_irqrestore(lruvec, flags);
> +
> +	/* flush any remaining LRU pages that need to be processed */
> +	if (pagevec_count(&pvec))
> +		__release_lru_pages(&pvec, &pages_to_free);
>  
>  	mem_cgroup_uncharge_list(&pages_to_free);
>  	free_unref_page_list(&pages_to_free);
> 

* Re: [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
  2020-08-19  4:27 ` [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes Alexander Duyck
@ 2020-08-19  7:56   ` Alex Shi
  2020-08-19 14:42     ` Alexander Duyck
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Shi @ 2020-08-19  7:56 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, daniel.m.jordan, linux-mm, shakeelb, willy, hannes,
	tj, cgroups, akpm, richard.weiyang, mgorman, iamjoonsoo.kim



On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> 
> The current code for move_pages_to_lru releases the LRU lock every time
> it encounters an unevictable page or a compound page that must be freed.
> This results in a fair amount of code bulk because the lruvec has to be
> looked up again every time the lock is reacquired.
>
> Instead of doing this I believe we can break the code up into 3 passes.
> The first pass will identify the pages we can move to the LRU and move
> those. In addition it will sort the list, leaving the unevictable pages in
> the list and moving those pages that have dropped to a reference count of
> 0 to pages_to_free. The second pass will return the unevictable pages to
> the LRU. The final pass will free any compound pages we have in the
> pages_to_free list before we merge it back with the original list and
> return from the function.
>
> The advantage of doing it this way is that we only have to release the
> lock between pass 1 and 2, and then reacquire it after pass 3, once we
> have merged the pages_to_free back into the original list. As such we only
> have to release the lock at most once in an entire call instead of having
> to test whether we need to relock with each page.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> ---
>  mm/vmscan.c |   68 ++++++++++++++++++++++++++++++++++-------------------------
>  1 file changed, 39 insertions(+), 29 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 3ebe3f9b653b..6a2bdbc1a9eb 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1850,22 +1850,21 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  {
>  	int nr_pages, nr_moved = 0;
>  	LIST_HEAD(pages_to_free);
> -	struct page *page;
> -	struct lruvec *orig_lruvec = lruvec;
> +	struct page *page, *next;
>  	enum lru_list lru;
>  
> -	while (!list_empty(list)) {
> -		page = lru_to_page(list);
> +	list_for_each_entry_safe(page, next, list, lru) {
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
> -		list_del(&page->lru);
> -		if (unlikely(!page_evictable(page))) {
> -			if (lruvec) {
> -				spin_unlock_irq(&lruvec->lru_lock);
> -				lruvec = NULL;
> -			}
> -			putback_lru_page(page);
> +
> +		/*
> +		 * if page is unevictable leave it on the list to be returned
> +		 * to the LRU after we have finished processing the other
> +		 * entries in the list.
> +		 */
> +		if (unlikely(!page_evictable(page)))
>  			continue;
> -		}
> +
> +		list_del(&page->lru);
>  
>  		/*
>  		 * The SetPageLRU needs to be kept here for list intergrity.
> @@ -1878,20 +1877,14 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  		 *     list_add(&page->lru,)
>  		 *                                        list_add(&page->lru,)
>  		 */
> -		lruvec = relock_page_lruvec_irq(page, lruvec);

This actually changes the meaning of the current function. I had seen a bug when there
was no relock, but after moving to the 5.9 kernel I can not reproduce the bug any more.
I am not sure whether 5.9 fixed the problem and the relock is no longer needed here.

For the rest of this patch. 
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>


>  		SetPageLRU(page);
>  
>  		if (unlikely(put_page_testzero(page))) {
>  			__ClearPageLRU(page);
>  			__ClearPageActive(page);
>  
> -			if (unlikely(PageCompound(page))) {
> -				spin_unlock_irq(&lruvec->lru_lock);
> -				lruvec = NULL;
> -				destroy_compound_page(page);
> -			} else
> -				list_add(&page->lru, &pages_to_free);
> -
> +			/* defer freeing until we can release lru_lock */
> +			list_add(&page->lru, &pages_to_free);
>  			continue;
>  		}
>  
> @@ -1904,16 +1897,33 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  		if (PageActive(page))
>  			workingset_age_nonresident(lruvec, nr_pages);
>  	}
> -	if (orig_lruvec != lruvec) {
> -		if (lruvec)
> -			spin_unlock_irq(&lruvec->lru_lock);
> -		spin_lock_irq(&orig_lruvec->lru_lock);
> -	}
>  
> -	/*
> -	 * To save our caller's stack, now use input list for pages to free.
> -	 */
> -	list_splice(&pages_to_free, list);
> +	if (unlikely(!list_empty(list) || !list_empty(&pages_to_free))) {
> +		spin_unlock_irq(&lruvec->lru_lock);
> +
> +		/* return any unevictable pages to the LRU list */
> +		while (!list_empty(list)) {
> +			page = lru_to_page(list);
> +			list_del(&page->lru);
> +			putback_lru_page(page);
> +		}
> +
> +		/*
> +		 * To save our caller's stack use input
> +		 * list for pages to free.
> +		 */
> +		list_splice(&pages_to_free, list);
> +
> +		/* free any compound pages we have in the list */
> +		list_for_each_entry_safe(page, next, list, lru) {
> +			if (likely(!PageCompound(page)))
> +				continue;
> +			list_del(&page->lru);
> +			destroy_compound_page(page);
> +		}
> +
> +		spin_lock_irq(&lruvec->lru_lock);
> +	}
>  
>  	return nr_moved;
>  }
> 

* Re: [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block
  2020-08-19  4:27 ` [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
  2020-08-19  7:48   ` Alex Shi
@ 2020-08-19 11:43   ` Matthew Wilcox
  2020-08-19 14:48     ` Alexander Duyck
  1 sibling, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2020-08-19 11:43 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: alex.shi, yang.shi, lkp, rong.a.chen, khlebnikov, kirill, hughd,
	linux-kernel, daniel.m.jordan, linux-mm, shakeelb, hannes, tj,
	cgroups, akpm, richard.weiyang, mgorman, iamjoonsoo.kim

On Tue, Aug 18, 2020 at 09:27:05PM -0700, Alexander Duyck wrote:
> +		/*
> +		 * Page is compound. We know the order before we know if it is
> +		 * on the LRU so we cannot assume it is THP. However since the
> +		 * page will have the LRU validated shortly we can use the value
> +		 * to skip over this page for now or validate the LRU is set and
> +		 * then isolate the entire compound page if we are isolating to
> +		 * generate a CMA page.
> +		 */
> +		if (PageCompound(page)) {
> +			const unsigned int order = compound_order(page);
> +
> +			if (likely(order < MAX_ORDER))
> +				low_pfn += (1UL << order) - 1;

Hmm.  You're checking for PageCompound but then skipping 1UL << order.
That only works if PageHead.  If instead this is PageCompound because
it's PageTail, you need to do something like:

				low_pfn |= (1UL << order) - 1;

which will move you to the end of the page you're in the middle of.
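
For example, say low_pfn is 0x1301 and you're in the middle of an order-9
page spanning 0x1200-0x13ff:

	low_pfn += (1UL << 9) - 1;	/* 0x1301 + 0x1ff = 0x1500, overshoots */
	low_pfn |= (1UL << 9) - 1;	/* 0x1301 | 0x1ff = 0x13ff, end of page */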

If PageTail can't actually happen here, then it's better to check for
PageHead explicitly and WARN_ON if you get a PageTail (eg a page was
combined into a compound page after you processed the earlier head page).

Is it possible the page you've found is hugetlbfs?  Those can have orders
larger than MAX_ORDER.

* Re: [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
  2020-08-19  7:56   ` Alex Shi
@ 2020-08-19 14:42     ` Alexander Duyck
  2020-08-20  9:56       ` Alex Shi
  0 siblings, 1 reply; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19 14:42 UTC (permalink / raw)
  To: Alex Shi
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim

On Wed, Aug 19, 2020 at 12:58 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
>
>
> On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> > From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> >
> > The current code for move_pages_to_lru releases the LRU lock every time
> > it encounters an unevictable page or a compound page that must be freed.
> > This results in a fair amount of code bulk because the lruvec has to be
> > looked up again every time the lock is reacquired.
> >
> > Instead of doing this I believe we can break the code up into 3 passes.
> > The first pass will identify the pages we can move to the LRU and move
> > those. In addition it will sort the list, leaving the unevictable pages in
> > the list and moving those pages that have dropped to a reference count of
> > 0 to pages_to_free. The second pass will return the unevictable pages to
> > the LRU. The final pass will free any compound pages we have in the
> > pages_to_free list before we merge it back with the original list and
> > return from the function.
> >
> > The advantage of doing it this way is that we only have to release the
> > lock between pass 1 and 2, and then reacquire it after pass 3, once we
> > have merged the pages_to_free back into the original list. As such we only
> > have to release the lock at most once in an entire call instead of having
> > to test whether we need to relock with each page.
> >
> > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > ---
> >  mm/vmscan.c |   68 ++++++++++++++++++++++++++++++++++-------------------------
> >  1 file changed, 39 insertions(+), 29 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 3ebe3f9b653b..6a2bdbc1a9eb 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1850,22 +1850,21 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >  {
> >       int nr_pages, nr_moved = 0;
> >       LIST_HEAD(pages_to_free);
> > -     struct page *page;
> > -     struct lruvec *orig_lruvec = lruvec;
> > +     struct page *page, *next;
> >       enum lru_list lru;
> >
> > -     while (!list_empty(list)) {
> > -             page = lru_to_page(list);
> > +     list_for_each_entry_safe(page, next, list, lru) {
> >               VM_BUG_ON_PAGE(PageLRU(page), page);
> > -             list_del(&page->lru);
> > -             if (unlikely(!page_evictable(page))) {
> > -                     if (lruvec) {
> > -                             spin_unlock_irq(&lruvec->lru_lock);
> > -                             lruvec = NULL;
> > -                     }
> > -                     putback_lru_page(page);
> > +
> > +             /*
> > +              * if page is unevictable leave it on the list to be returned
> > +              * to the LRU after we have finished processing the other
> > +              * entries in the list.
> > +              */
> > +             if (unlikely(!page_evictable(page)))
> >                       continue;
> > -             }
> > +
> > +             list_del(&page->lru);
> >
> >               /*
> >                * The SetPageLRU needs to be kept here for list intergrity.
> > @@ -1878,20 +1877,14 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >                *     list_add(&page->lru,)
> >                *                                        list_add(&page->lru,)
> >                */
> > -             lruvec = relock_page_lruvec_irq(page, lruvec);
>
> This actually changes the meaning of the current function. I had seen a bug when there
> was no relock, but after moving to the 5.9 kernel I can not reproduce the bug any more.
> I am not sure whether 5.9 fixed the problem and the relock is no longer needed here.

So I am not sure what you mean here about "changes the meaning of the
current function". Which function are you referring to and what changed?

From what I can tell the pages cannot change memcg because they were
isolated and had the LRU flag stripped. They shouldn't be able to
change destination LRU vector as a result. Assuming that, then they
can all be processed under same LRU lock and we can avoid having to
release it until we are forced to do so to call putback_lru_page or
destroy the compound pages that were freed while we were shrinking the
LRU lists.

> For the rest of this patch.
> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>

Thanks for the review.

- Alex

* Re: [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block
  2020-08-19 11:43   ` Matthew Wilcox
@ 2020-08-19 14:48     ` Alexander Duyck
  0 siblings, 0 replies; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19 14:48 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Alex Shi, Yang Shi, kbuild test robot, Rong Chen,
	Konstantin Khlebnikov, Kirill A. Shutemov, Hugh Dickins, LKML,
	Daniel Jordan, linux-mm, Shakeel Butt, Johannes Weiner,
	Tejun Heo, cgroups, Andrew Morton, Wei Yang, Mel Gorman,
	Joonsoo Kim

On Wed, Aug 19, 2020 at 4:43 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Aug 18, 2020 at 09:27:05PM -0700, Alexander Duyck wrote:
> > +             /*
> > +              * Page is compound. We know the order before we know if it is
> > +              * on the LRU so we cannot assume it is THP. However since the
> > +              * page will have the LRU validated shortly we can use the value
> > +              * to skip over this page for now or validate the LRU is set and
> > +              * then isolate the entire compound page if we are isolating to
> > +              * generate a CMA page.
> > +              */
> > +             if (PageCompound(page)) {
> > +                     const unsigned int order = compound_order(page);
> > +
> > +                     if (likely(order < MAX_ORDER))
> > +                             low_pfn += (1UL << order) - 1;
>
> Hmm.  You're checking for PageCompound but then skipping 1UL << order.
> That only works if PageHead.  If instead this is PageCompound because
> it's PageTail, you need to do something like:
>
>                                 low_pfn |= (1UL << order) - 1;
>
> which will move you to the end of the page you're in the middle of.

Can you successfully call get_page_unless_zero on a tail page? I
thought their reference count was 0? There is a get_page_unless_zero
call before the PageCompound check, so I don't think we can get a tail
page.
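
For reference, the helper as I read it in include/linux/mm.h:

	static inline bool get_page_unless_zero(struct page *page)
	{
		return page_ref_add_unless(page, 1, 0);
	}

If tail pages really do keep a reference count of 0, this should always
fail on them, so we should never reach the PageCompound check with a tail
page in hand.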

> If PageTail can't actually happen here, then it's better to check for
> PageHead explicitly and WARN_ON if you get a PageTail (eg a page was
> combined into a compound page after you processed the earlier head page).
>
> Is it possible the page you've found is hugetlbfs?  Those can have orders
> larger than MAX_ORDER.

So in theory we only need to jump pageblock_order. However there are
some architectures where that is not a fixed constant, so if I am not
mistaken it would add some additional overhead. In addition we should
only have been provided a single pageblock, so the check further down
that prevents low_pfn from passing end_pfn should reset it to the
correct value.

* Re: [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages
  2020-08-19  7:50   ` Alex Shi
@ 2020-08-19 14:52     ` Alexander Duyck
  0 siblings, 0 replies; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19 14:52 UTC (permalink / raw)
  To: Alex Shi
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim

On Wed, Aug 19, 2020 at 12:52 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
>
>
> On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> > From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> >
> > In isolate_lru_pages we have an exception path where if we call
> > get_page_unless_zero and that succeeds, but TestClearPageLRU fails, we
> > call put_page. Normally this would be problematic, but due to the way that
> > the calls are ordered and the fact that we are holding the LRU lock, we
> > know that the caller must be holding another reference for the page. Given
> > that, we can replace the put_page with a call to put_page_testzero
> > contained within a WARN_ON. By doing this we should see if we ever leak a
> > page as a result of the reference count somehow hitting zero when it
> > shouldn't, and can avoid the overhead and confusion of using the full
> > put_page call.
> >
> > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > ---
> >  mm/vmscan.c |    9 ++++++---
> >  1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 5bc0c2322043..3ebe3f9b653b 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1688,10 +1688,13 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
> >
> >                       if (!TestClearPageLRU(page)) {
> >                               /*
> > -                              * This page may in other isolation path,
> > -                              * but we still hold lru_lock.
> > +                              * This page is being isolated in another
> > +                              * thread, but we still hold lru_lock. The
> > +                              * other thread must be holding a reference
> > +                              * to the page so this should never hit a
> > +                              * reference count of 0.
> >                                */
> > -                             put_page(page);
> > +                             WARN_ON(put_page_testzero(page));
>
> It seems WARN_ON is always enabled.
>
> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>

Yeah, it is always enabled, however it should never be triggered. I had
considered just putting a page_ref_dec here since in theory this path
should never be taken, but I thought that as a debug catch I would add
the WARN_ON and put_page_testzero. If we ever do encounter this being
triggered then it will leak a page of memory, which isn't the end of the
world, but I thought it would warrant a WARN_ON.

* Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
  2020-08-19  7:53   ` Alex Shi
@ 2020-08-19 14:57     ` Alexander Duyck
  2020-08-20  9:49       ` Alex Shi
  0 siblings, 1 reply; 20+ messages in thread
From: Alexander Duyck @ 2020-08-19 14:57 UTC (permalink / raw)
  To: Alex Shi
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim

On Wed, Aug 19, 2020 at 12:54 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
>
>
> On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> > From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> >
> > The release_pages function has a number of paths that end up with the
> > LRU lock having to be released and reacquired. One such example is the
> > freeing of THP pages, as it requires releasing the LRU lock so that it can
> > be potentially reacquired by __put_compound_page.
> >
> > In order to avoid that we can split the work into 3 passes. The first,
> > run without the LRU lock, goes through and sorts out the pages that are
> > not on the LRU, so they can be freed immediately, from those that are. The
> > second pass then removes those pages from the LRU in batches as large as a
> > pagevec can hold before releasing the LRU lock. Once the pages have been
> > removed from the LRU we can then proceed to free the remaining pages
> > without needing to worry about whether they are on the LRU any further.
> >
> > The general idea is to avoid bouncing the LRU lock between pages and to
> > hopefully aggregate the lock for up to the full page vector worth of pages.
> >
> > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > ---
> >  mm/swap.c |  109 +++++++++++++++++++++++++++++++++++++------------------------
> >  1 file changed, 67 insertions(+), 42 deletions(-)
> >
> > diff --git a/mm/swap.c b/mm/swap.c
> > index fe53449fa1b8..b405f81b2c60 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -795,6 +795,54 @@ void lru_add_drain_all(void)
> >  }
> >  #endif
> >
> > +static void __release_page(struct page *page, struct list_head *pages_to_free)
> > +{
> > +     if (PageCompound(page)) {
> > +             __put_compound_page(page);
> > +     } else {
> > +             /* Clear Active bit in case of parallel mark_page_accessed */
> > +             __ClearPageActive(page);
> > +             __ClearPageWaiters(page);
> > +
> > +             list_add(&page->lru, pages_to_free);
> > +     }
> > +}
> > +
> > +static void __release_lru_pages(struct pagevec *pvec,
> > +                             struct list_head *pages_to_free)
> > +{
> > +     struct lruvec *lruvec = NULL;
> > +     unsigned long flags = 0;
> > +     int i;
> > +
> > +     /*
> > +      * The pagevec at this point should contain a set of pages with
> > +      * their reference count at 0 and the LRU flag set. We will now
> > +      * need to pull the pages from their LRU lists.
> > +      *
> > +      * We walk the list backwards here since that way we are starting at
> > +      * the pages that should be warmest in the cache.
> > +      */
> > +     for (i = pagevec_count(pvec); i--;) {
> > +             struct page *page = pvec->pages[i];
> > +
> > +             lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>
> The lock bouncing is better with this patch; would you like to go
> further, e.g. using add_lruvecs, to reduce the bouncing even more?
>
> Thanks
> Alex

I'm not sure how much doing something like that would add. In my case
I had a very specific issue that this is addressing, which is the fact
that every compound page was taking the LRU lock and zone lock
separately. With this patch that is reduced to one LRU lock per 15
pages and then the zone lock per page. By grouping or sorting pages by
lruvec I am not sure there will be much benefit, as I am not certain
how often we will end up with pages interleaved between multiple
lruvecs. In addition, since I am limiting the batch to a pagevec, which
caps it at 15 pages, I am not sure there will be much benefit in
sorting the pages beforehand.
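Roughly, the locking pattern being described looks like the following
sketch (a sketch only, using the relock/unlock helpers from this
series rather than the literal patch code; it assumes a struct
pagevec *pvec of pages already pulled off their reference counts):

	/*
	 * One LRU lock round-trip now covers up to a full pagevec
	 * (PAGEVEC_SIZE == 15) of pages; relock_page_lruvec_irqsave()
	 * only drops and retakes the lock when a page belongs to a
	 * different lruvec than the previous one.
	 */
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	for (i = pagevec_count(pvec); i--;) {
		struct page *page = pvec->pages[i];

		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
		/* ... pull the page from its LRU list under the lock ... */
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);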

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
  2020-08-19 14:57     ` Alexander Duyck
@ 2020-08-20  9:49       ` Alex Shi
  2020-08-20 14:13         ` Alexander Duyck
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Shi @ 2020-08-20  9:49 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim



On 2020/8/19 10:57 PM, Alexander Duyck wrote:
>>>       lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>> The lock bouncing is better with this patch; would you like to go
>> further, e.g. using add_lruvecs, to reduce the bouncing even more?
>>
>> Thanks
>> Alex
> I'm not sure how much doing something like that would add. In my case
> I had a very specific issue that this is addressing, which is the fact
> that every compound page was taking the LRU lock and zone lock
> separately. With this patch that is reduced to one LRU lock per 15
> pages and then the zone lock per page. By grouping or sorting pages by
> lruvec I am not sure there will be much benefit, as I am not certain
> how often we will end up with pages interleaved between multiple
> lruvecs. In addition, since I am limiting the batch to a pagevec, which
> caps it at 15 pages, I am not sure there will be much benefit in
> sorting the pages beforehand.
> 

The relock will unlock and then take another lock again; the cost is that
the second lock acquisition has to wait its turn for fairness under
concurrent lruvec locking. If we sort the pages beforehand, we can avoid
that fairness wait here. Of course, the perf result depends on the scenario.
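Something like the following, purely hypothetical sketch could express
the idea (sort_pages_by_lruvec() is made up for illustration; the lock
helpers are the per-lruvec ones from this series):

	/*
	 * Group the pagevec by lruvec first, then take each lruvec
	 * lock exactly once, so we never re-enter the fairness queue
	 * for a lock we just dropped.
	 */
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	sort_pages_by_lruvec(pvec);	/* hypothetical helper */

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];
		struct lruvec *next;

		next = mem_cgroup_page_lruvec(page, page_pgdat(page));
		if (next != lruvec) {
			if (lruvec)
				unlock_page_lruvec_irqrestore(lruvec, flags);
			lruvec = next;
			spin_lock_irqsave(&lruvec->lru_lock, flags);
		}
		/* ... process the page under the held lock ... */
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);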

Thanks
Alex

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
  2020-08-19 14:42     ` Alexander Duyck
@ 2020-08-20  9:56       ` Alex Shi
  2020-08-20 17:15         ` Alexander Duyck
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Shi @ 2020-08-20  9:56 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim



On 2020/8/19 10:42 PM, Alexander Duyck wrote:
>> It actually changed the meaning from the current function; I had seen a bug
>> when there was no relock. But after moving to the 5.9 kernel I cannot
>> reproduce the bug any more. I am not sure whether 5.9 fixed the problem and
>> we no longer need the relock here.
> So I am not sure what you mean here about "changed the meaning from
> the current function". Which function are you referring to and what
> changed?
> 
> From what I can tell the pages cannot change memcg because they were
> isolated and had the LRU flag stripped. They shouldn't be able to
> change destination LRU vector as a result. Assuming that, they can
> all be processed under the same LRU lock, and we can avoid having to
> release it until we are forced to do so to call putback_lru_page or
> destroy the compound pages that were freed while we were shrinking the
> LRU lists.
> 

I had reported a bug based on the 5.8 kernel:
https://lkml.org/lkml/2020/7/28/465

I am not sure it was fixed in the newer kernel. The original line was
introduced by Hugh Dickins; I believe it would be great if you could get
comments from him.

Thanks
Alex

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
  2020-08-20  9:49       ` Alex Shi
@ 2020-08-20 14:13         ` Alexander Duyck
  0 siblings, 0 replies; 20+ messages in thread
From: Alexander Duyck @ 2020-08-20 14:13 UTC (permalink / raw)
  To: Alex Shi
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim

On Thu, Aug 20, 2020 at 2:51 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
>
>
> On 2020/8/19 10:57 PM, Alexander Duyck wrote:
> >>>       lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
> >> The lock bouncing is better with this patch; would you like to go
> >> further, e.g. using add_lruvecs, to reduce the bouncing even more?
> >>
> >> Thanks
> >> Alex
> > I'm not sure how much doing something like that would add. In my case
> > I had a very specific issue that this is addressing, which is the fact
> > that every compound page was taking the LRU lock and zone lock
> > separately. With this patch that is reduced to one LRU lock per 15
> > pages and then the zone lock per page. By grouping or sorting pages by
> > lruvec I am not sure there will be much benefit, as I am not certain
> > how often we will end up with pages interleaved between multiple
> > lruvecs. In addition, since I am limiting the batch to a pagevec, which
> > caps it at 15 pages, I am not sure there will be much benefit in
> > sorting the pages beforehand.
> >
>
> The relock will unlock and then take another lock again; the cost is that
> the second lock acquisition has to wait its turn for fairness under
> concurrent lruvec locking. If we sort the pages beforehand, we can avoid
> that fairness wait here. Of course, the perf result depends on the scenario.

Agreed. The question is: in how many scenarios are you going to have
pages interleaved between more than one lruvec? I suspect in most
cases you should only have one lruvec for all the pages being
processed in a single pagevec.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
  2020-08-20  9:56       ` Alex Shi
@ 2020-08-20 17:15         ` Alexander Duyck
  0 siblings, 0 replies; 20+ messages in thread
From: Alexander Duyck @ 2020-08-20 17:15 UTC (permalink / raw)
  To: Alex Shi
  Cc: Yang Shi, kbuild test robot, Rong Chen, Konstantin Khlebnikov,
	Kirill A. Shutemov, Hugh Dickins, LKML, Daniel Jordan, linux-mm,
	Shakeel Butt, Matthew Wilcox, Johannes Weiner, Tejun Heo,
	cgroups, Andrew Morton, Wei Yang, Mel Gorman, Joonsoo Kim

On Thu, Aug 20, 2020 at 2:58 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
>
>
> On 2020/8/19 10:42 PM, Alexander Duyck wrote:
> >> It actually changed the meaning from the current function; I had seen a bug
> >> when there was no relock. But after moving to the 5.9 kernel I cannot
> >> reproduce the bug any more. I am not sure whether 5.9 fixed the problem and
> >> we no longer need the relock here.
> > So I am not sure what you mean here about "changed the meaning from
> > the current function". Which function are you referring to and what
> > changed?
> >
> > From what I can tell the pages cannot change memcg because they were
> > isolated and had the LRU flag stripped. They shouldn't be able to
> > change destination LRU vector as a result. Assuming that, they can
> > all be processed under the same LRU lock, and we can avoid having to
> > release it until we are forced to do so to call putback_lru_page or
> > destroy the compound pages that were freed while we were shrinking the
> > LRU lists.
> >
>
> I had reported a bug based on the 5.8 kernel:
> https://lkml.org/lkml/2020/7/28/465
>
> I am not sure it was fixed in the newer kernel. The original line was
> introduced by Hugh Dickins; I believe it would be great if you could get
> comments from him.

When I brought this up before, you had pointed to the relocking being
due to the fact that the function was reacquiring the lruvec for some
reason. I wonder if the LRU bit stripping serializing things made the
check for the lruvec after releasing the lock redundant.
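For reference, a sketch of the serialization I'm referring to, assuming
the TestClearPageLRU-based isolation helpers from this series
(illustrative, not the literal kernel code):

	/*
	 * Isolation is gated on a test-and-clear of the LRU bit, so
	 * only the task that wins the test-and-clear owns the page's
	 * LRU state; an isolated page cannot change lruvec under us.
	 */
	if (TestClearPageLRU(page)) {
		struct lruvec *lruvec;

		get_page(page);
		lruvec = lock_page_lruvec_irq(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
		unlock_page_lruvec_irq(lruvec);
	}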

- Alex

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2020-08-20 17:16 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-19  4:26 [RFC PATCH v2 0/5] Minor cleanups and performance optimizations for LRU rework Alexander Duyck
2020-08-19  4:27 ` [RFC PATCH v2 1/5] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
2020-08-19  7:48   ` Alex Shi
2020-08-19 11:43   ` Matthew Wilcox
2020-08-19 14:48     ` Alexander Duyck
2020-08-19  4:27 ` [RFC PATCH v2 2/5] mm: Drop use of test_and_set_skip in favor of just setting skip Alexander Duyck
2020-08-19  7:50   ` Alex Shi
2020-08-19  4:27 ` [RFC PATCH v2 3/5] mm: Add explicit page decrement in exception path for isolate_lru_pages Alexander Duyck
2020-08-19  7:50   ` Alex Shi
2020-08-19 14:52     ` Alexander Duyck
2020-08-19  4:27 ` [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes Alexander Duyck
2020-08-19  7:53   ` Alex Shi
2020-08-19 14:57     ` Alexander Duyck
2020-08-20  9:49       ` Alex Shi
2020-08-20 14:13         ` Alexander Duyck
2020-08-19  4:27 ` [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes Alexander Duyck
2020-08-19  7:56   ` Alex Shi
2020-08-19 14:42     ` Alexander Duyck
2020-08-20  9:56       ` Alex Shi
2020-08-20 17:15         ` Alexander Duyck

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).