linux-mm.kvack.org archive mirror
* [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap)
@ 2020-09-22  2:01 Rik van Riel
  2020-09-22  2:01 ` [PATCH 1/2] mm,swap: extract swap single page readahead into its own function Rik van Riel
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Rik van Riel @ 2020-09-22  2:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, kernel-team, niketa, akpm, sjenning, ddstreet,
	konrad.wilk, hannes

Both with frontswap/zswap, and with some extremely fast IO devices,
swap IO will be done before the "asynchronous" swap_readpage() call
has returned.

In that case, doing swap readahead only wastes memory, increases
latency, and increases the chances of needing to evict something more
useful from memory, so just skip swap readahead.
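
The decision being proposed can be sketched in plain userspace C
(hypothetical names, not the kernel API): if the faulting page's IO
completed before the nominally asynchronous read returned, neighbouring
entries are just as cheap to fetch on demand, so readahead buys nothing.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model only. In the kernel the "came back instantly"
 * signal is the page being uptodate right after swap_readpage(),
 * which happens with frontswap/zswap and with polled fast devices.
 */
struct swapin_result {
	bool uptodate;	/* contents valid when the read call returned */
};

static bool should_readahead(const struct swapin_result *primary)
{
	/* IO finished synchronously: skip readahead of neighbours. */
	if (primary->uptodate)
		return false;
	return true;
}
```

A fault serviced from zswap would take the `uptodate` path and skip the
surrounding reads; a fault waiting on a slow device would not.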




^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/2] mm,swap: extract swap single page readahead into its own function
  2020-09-22  2:01 [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap) Rik van Riel
@ 2020-09-22  2:01 ` Rik van Riel
  2020-09-23  6:32   ` Christoph Hellwig
  2020-09-22  2:01 ` [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously Rik van Riel
  2020-09-22 17:12 ` [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap) Andrew Morton
  2 siblings, 1 reply; 11+ messages in thread
From: Rik van Riel @ 2020-09-22  2:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, kernel-team, niketa, akpm, sjenning, ddstreet,
	konrad.wilk, hannes, Rik van Riel

Split swap single page readahead into its own function, to make
the next patch easier to read. No functional changes.

Signed-off-by: Rik van Riel <riel@surriel.com>
---
 mm/swap_state.c | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index c16eebb81d8b..aacb9ba53f63 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -594,6 +594,28 @@ static unsigned long swapin_nr_pages(unsigned long offset)
 	return pages;
 }
 
+static struct page *swap_cluster_read_one(swp_entry_t entry,
+		unsigned long offset, gfp_t gfp_mask,
+		struct vm_area_struct *vma, unsigned long addr, bool readahead)
+{
+	bool page_allocated;
+	struct page *page;
+
+	page =__read_swap_cache_async(swp_entry(swp_type(entry), offset),
+				      gfp_mask, vma, addr, &page_allocated);
+	if (!page)
+		return NULL;
+	if (page_allocated) {
+		swap_readpage(page, false);
+		if (readahead) {
+			SetPageReadahead(page);
+			count_vm_event(SWAP_RA);
+		}
+	}
+	put_page(page);
+	return page;
+}
+
 /**
  * swap_cluster_readahead - swap in pages in hope we need them soon
  * @entry: swap entry of this memory
@@ -615,14 +637,13 @@ static unsigned long swapin_nr_pages(unsigned long offset)
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 				struct vm_fault *vmf)
 {
-	struct page *page;
 	unsigned long entry_offset = swp_offset(entry);
 	unsigned long offset = entry_offset;
 	unsigned long start_offset, end_offset;
 	unsigned long mask;
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
-	bool do_poll = true, page_allocated;
+	bool do_poll = true;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -649,19 +670,8 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		page = __read_swap_cache_async(
-			swp_entry(swp_type(entry), offset),
-			gfp_mask, vma, addr, &page_allocated);
-		if (!page)
-			continue;
-		if (page_allocated) {
-			swap_readpage(page, false);
-			if (offset != entry_offset) {
-				SetPageReadahead(page);
-				count_vm_event(SWAP_RA);
-			}
-		}
-		put_page(page);
+		swap_cluster_read_one(entry, offset, gfp_mask, vma, addr,
+				      offset != entry_offset);
 	}
 	blk_finish_plug(&plug);
 
-- 
2.25.4




* [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously
  2020-09-22  2:01 [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap) Rik van Riel
  2020-09-22  2:01 ` [PATCH 1/2] mm,swap: extract swap single page readahead into its own function Rik van Riel
@ 2020-09-22  2:01 ` Rik van Riel
  2020-09-22  3:13   ` huang ying
  2020-09-23  6:35   ` Christoph Hellwig
  2020-09-22 17:12 ` [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap) Andrew Morton
  2 siblings, 2 replies; 11+ messages in thread
From: Rik van Riel @ 2020-09-22  2:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, kernel-team, niketa, akpm, sjenning, ddstreet,
	konrad.wilk, hannes, Rik van Riel

Check whether a swap page was obtained instantaneously, for example
because it is in zswap, or on a very fast IO device which uses busy
waiting, and we did not wait on IO to swap in this page.

If no IO was needed to get the swap page we want, kicking off readahead
on surrounding swap pages is likely to be counterproductive, because the
extra loads will cause additional latency, use up extra memory, and chances
are the surrounding pages in swap are just as fast to load as this one,
making readahead pointless.

Signed-off-by: Rik van Riel <riel@surriel.com>
---
 mm/swap_state.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index aacb9ba53f63..6919f9d5fe88 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -637,6 +637,7 @@ static struct page *swap_cluster_read_one(swp_entry_t entry,
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 				struct vm_fault *vmf)
 {
+	struct page *page;
 	unsigned long entry_offset = swp_offset(entry);
 	unsigned long offset = entry_offset;
 	unsigned long start_offset, end_offset;
@@ -668,11 +669,18 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		end_offset = si->max - 1;
 
 	blk_start_plug(&plug);
+	/* If we read the page without waiting on IO, skip readahead. */
+	page = swap_cluster_read_one(entry, offset, gfp_mask, vma, addr, false);
+	if (page && PageUptodate(page))
+		goto skip_unplug;
+
+	/* Ok, do the async read-ahead now. */
 	for (offset = start_offset; offset <= end_offset ; offset++) {
-		/* Ok, do the async read-ahead now */
-		swap_cluster_read_one(entry, offset, gfp_mask, vma, addr,
-				      offset != entry_offset);
+		if (offset == entry_offset)
+			continue;
+		swap_cluster_read_one(entry, offset, gfp_mask, vma, addr, true);
 	}
+skip_unplug:
 	blk_finish_plug(&plug);
 
 	lru_add_drain();	/* Push any new pages onto the LRU now */
-- 
2.25.4




* Re: [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously
  2020-09-22  2:01 ` [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously Rik van Riel
@ 2020-09-22  3:13   ` huang ying
  2020-09-22 11:33     ` Rik van Riel
  2020-09-23  6:35   ` Christoph Hellwig
  1 sibling, 1 reply; 11+ messages in thread
From: huang ying @ 2020-09-22  3:13 UTC (permalink / raw)
  To: Rik van Riel
  Cc: LKML, linux-mm, kernel-team, niketa, Andrew Morton,
	Seth Jennings, Dan Streetman, Konrad Rzeszutek Wilk,
	Johannes Weiner, Huang Ying

On Tue, Sep 22, 2020 at 10:02 AM Rik van Riel <riel@surriel.com> wrote:
>
> Check whether a swap page was obtained instantaneously, for example
> because it is in zswap, or on a very fast IO device which uses busy
> waiting, and we did not wait on IO to swap in this page.
> If no IO was needed to get the swap page we want, kicking off readahead
> on surrounding swap pages is likely to be counterproductive, because the
> extra loads will cause additional latency, use up extra memory, and chances
> are the surrounding pages in swap are just as fast to load as this one,
> making readahead pointless.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  mm/swap_state.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index aacb9ba53f63..6919f9d5fe88 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -637,6 +637,7 @@ static struct page *swap_cluster_read_one(swp_entry_t entry,
>  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>                                 struct vm_fault *vmf)

Why not do this for swap_vma_readahead() too?  swap_cluster_read_one()
can be used in swap_vma_readahead() too.

>  {
> +       struct page *page;
>         unsigned long entry_offset = swp_offset(entry);
>         unsigned long offset = entry_offset;
>         unsigned long start_offset, end_offset;
> @@ -668,11 +669,18 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>                 end_offset = si->max - 1;
>
>         blk_start_plug(&plug);
> +       /* If we read the page without waiting on IO, skip readahead. */
> +       page = swap_cluster_read_one(entry, offset, gfp_mask, vma, addr, false);
> +       if (page && PageUptodate(page))
> +               goto skip_unplug;
> +
> +       /* Ok, do the async read-ahead now. */
>         for (offset = start_offset; offset <= end_offset ; offset++) {
> -               /* Ok, do the async read-ahead now */
> -               swap_cluster_read_one(entry, offset, gfp_mask, vma, addr,
> -                                     offset != entry_offset);
> +               if (offset == entry_offset)
> +                       continue;
> +               swap_cluster_read_one(entry, offset, gfp_mask, vma, addr, true);
>         }
> +skip_unplug:
>         blk_finish_plug(&plug);
>
>         lru_add_drain();        /* Push any new pages onto the LRU now */

Best Regards,
Huang, Ying



* Re: [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously
  2020-09-22  3:13   ` huang ying
@ 2020-09-22 11:33     ` Rik van Riel
  0 siblings, 0 replies; 11+ messages in thread
From: Rik van Riel @ 2020-09-22 11:33 UTC (permalink / raw)
  To: huang ying
  Cc: LKML, linux-mm, kernel-team, niketa, Andrew Morton,
	Seth Jennings, Dan Streetman, Konrad Rzeszutek Wilk,
	Johannes Weiner, Huang Ying


On Tue, 2020-09-22 at 11:13 +0800, huang ying wrote:
> On Tue, Sep 22, 2020 at 10:02 AM Rik van Riel <riel@surriel.com>
> wrote:
> > Check whether a swap page was obtained instantaneously, for example
> > because it is in zswap, or on a very fast IO device which uses busy
> > waiting, and we did not wait on IO to swap in this page.
> > If no IO was needed to get the swap page we want, kicking off
> > readahead
> > on surrounding swap pages is likely to be counterproductive,
> > because the
> > extra loads will cause additional latency, use up extra memory, and
> > chances
> > are the surrounding pages in swap are just as fast to load as this
> > one,
> > making readahead pointless.
> > 
> > Signed-off-by: Rik van Riel <riel@surriel.com>
> > ---
> >  mm/swap_state.c | 14 +++++++++++---
> >  1 file changed, 11 insertions(+), 3 deletions(-)
> > 
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index aacb9ba53f63..6919f9d5fe88 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -637,6 +637,7 @@ static struct page
> > *swap_cluster_read_one(swp_entry_t entry,
> >  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t
> > gfp_mask,
> >                                 struct vm_fault *vmf)
> 
> Why not do this for swap_vma_readahead()
> too?  swap_cluster_read_one()
> can be used in swap_vma_readahead() too.

Good point, I should do the same thing for swap_vma_readahead()
as well. Let me do that and send in a version 2 of the series.

-- 
All Rights Reversed.



* Re: [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap)
  2020-09-22  2:01 [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap) Rik van Riel
  2020-09-22  2:01 ` [PATCH 1/2] mm,swap: extract swap single page readahead into its own function Rik van Riel
  2020-09-22  2:01 ` [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously Rik van Riel
@ 2020-09-22 17:12 ` Andrew Morton
  2020-10-05 17:32   ` Rik van Riel
  2 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2020-09-22 17:12 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, linux-mm, kernel-team, niketa, sjenning, ddstreet,
	konrad.wilk, hannes

On Mon, 21 Sep 2020 22:01:46 -0400 Rik van Riel <riel@surriel.com> wrote:

> Both with frontswap/zswap, and with some extremely fast IO devices,
> swap IO will be done before the "asynchronous" swap_readpage() call
> has returned.
> 
> In that case, doing swap readahead only wastes memory, increases
> latency, and increases the chances of needing to evict something more
> useful from memory. In that case, just skip swap readahead.

Any quantitative testing results?



* Re: [PATCH 1/2] mm,swap: extract swap single page readahead into its own function
  2020-09-22  2:01 ` [PATCH 1/2] mm,swap: extract swap single page readahead into its own function Rik van Riel
@ 2020-09-23  6:32   ` Christoph Hellwig
  2020-09-23  8:02     ` Hillf Danton
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2020-09-23  6:32 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, linux-mm, kernel-team, niketa, akpm, sjenning,
	ddstreet, konrad.wilk, hannes

On Mon, Sep 21, 2020 at 10:01:47PM -0400, Rik van Riel wrote:
> +static struct page *swap_cluster_read_one(swp_entry_t entry,
> +		unsigned long offset, gfp_t gfp_mask,
> +		struct vm_area_struct *vma, unsigned long addr, bool readahead)
> +{
> +	bool page_allocated;
> +	struct page *page;
> +
> +	page =__read_swap_cache_async(swp_entry(swp_type(entry), offset),
> +				      gfp_mask, vma, addr, &page_allocated);

Missing whitespace after the "=".

> +	if (!page)
> +		return NULL;
> +	if (page_allocated) {
> +		swap_readpage(page, false);
> +		if (readahead) {
> +			SetPageReadahead(page);
> +			count_vm_event(SWAP_RA);
> +		}
> +	}
> +	put_page(page);
> +	return page;
> +}

I think swap_vma_readahead can be switched to your new helper
pretty trivially as well, as could many of the users of
read_swap_cache_async.



* Re: [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously
  2020-09-22  2:01 ` [PATCH 2/2] mm,swap: skip swap readahead if page was obtained instantaneously Rik van Riel
  2020-09-22  3:13   ` huang ying
@ 2020-09-23  6:35   ` Christoph Hellwig
  1 sibling, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2020-09-23  6:35 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, linux-mm, kernel-team, niketa, akpm, sjenning,
	ddstreet, konrad.wilk, hannes

On Mon, Sep 21, 2020 at 10:01:48PM -0400, Rik van Riel wrote:
> +	struct page *page;
>  	unsigned long entry_offset = swp_offset(entry);
>  	unsigned long offset = entry_offset;
>  	unsigned long start_offset, end_offset;
> @@ -668,11 +669,18 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  		end_offset = si->max - 1;
>  
>  	blk_start_plug(&plug);
> +	/* If we read the page without waiting on IO, skip readahead. */
> +	page = swap_cluster_read_one(entry, offset, gfp_mask, vma, addr, false);
> +	if (page && PageUptodate(page))
> +		goto skip_unplug;
> +

At least for the normal block device path the plug will prevent the
I/O submission from actually happening and thus PageUptodate from
becoming true.  I think we need to split the different code paths
more cleanly.
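
The objection can be made concrete with a toy userspace model of
plugging (hypothetical names, not the block-layer API): while a task
holds a plug, submitted reads are only queued, so their pages cannot
become uptodate until the plug is flushed. Checking PageUptodate()
between blk_start_plug() and blk_finish_plug() therefore cannot succeed
for a normal block device.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_page { bool uptodate; };

#define PLUG_MAX 16
struct toy_plug {
	struct toy_page *queued[PLUG_MAX];
	size_t nr;
};

static void toy_submit_read(struct toy_plug *plug, struct toy_page *page)
{
	if (plug && plug->nr < PLUG_MAX) {
		plug->queued[plug->nr++] = page;	/* deferred: no IO yet */
		return;
	}
	page->uptodate = true;				/* unplugged: IO issued now */
}

static void toy_finish_plug(struct toy_plug *plug)
{
	/* Flushing the plug is what actually issues the queued IO. */
	for (size_t i = 0; i < plug->nr; i++)
		plug->queued[i]->uptodate = true;
	plug->nr = 0;
}
```

In this model an uptodate check right after toy_submit_read() under a
plug always fails, mirroring why the fast path in patch 2 only triggers
for backends like zswap that complete the read synchronously.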

Btw, what device type and media did you test this with?  What kind of
numbers did you get on what workload?



* Re: [PATCH 1/2] mm,swap: extract swap single page readahead into its own function
  2020-09-23  6:32   ` Christoph Hellwig
@ 2020-09-23  8:02     ` Hillf Danton
  0 siblings, 0 replies; 11+ messages in thread
From: Hillf Danton @ 2020-09-23  8:02 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Christoph Hellwig, linux-kernel, linux-mm, kernel-team, niketa,
	akpm, sjenning, ddstreet, konrad.wilk, Hillf Danton, hannes


On Wed, 23 Sep 2020 07:32:43 +0100 Christoph Hellwig wrote:
> On Mon, Sep 21, 2020 at 10:01:47PM -0400, Rik van Riel wrote:
> > +static struct page *swap_cluster_read_one(swp_entry_t entry,
> > +		unsigned long offset, gfp_t gfp_mask,
> > +		struct vm_area_struct *vma, unsigned long addr, bool readahead)
> > +{
> > +	bool page_allocated;
> > +	struct page *page;
> > +
> > +	page =__read_swap_cache_async(swp_entry(swp_type(entry), offset),
> > +				      gfp_mask, vma, addr, &page_allocated);
> 
> Missing whitespace after the "=".
> 
> > +	if (!page)
> > +		return NULL;
> > +	if (page_allocated) {
> > +		swap_readpage(page, false);
> > +		if (readahead) {
> > +			SetPageReadahead(page);
> > +			count_vm_event(SWAP_RA);
> > +		}
> > +	}
> > +	put_page(page);
> > +	return page;
> > +}

Check if put_page() makes page a hot potato at the call site
on the v2 spin.
> 
> I think swap_vma_readahead can be switched to your new helper
> pretty trivially as well, as could many of the users of
> read_swap_cache_async.




* Re: [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap)
  2020-09-22 17:12 ` [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap) Andrew Morton
@ 2020-10-05 17:32   ` Rik van Riel
  2020-10-09 14:38     ` Rik van Riel
  0 siblings, 1 reply; 11+ messages in thread
From: Rik van Riel @ 2020-10-05 17:32 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, niketa, sjenning, ddstreet,
	konrad.wilk, hannes


On Tue, 2020-09-22 at 10:12 -0700, Andrew Morton wrote:
> On Mon, 21 Sep 2020 22:01:46 -0400 Rik van Riel <riel@surriel.com>
> wrote:
> 
> > Both with frontswap/zswap, and with some extremely fast IO devices,
> > swap IO will be done before the "asynchronous" swap_readpage() call
> > has returned.
> > 
> > In that case, doing swap readahead only wastes memory, increases
> > latency, and increases the chances of needing to evict something
> > more
> > useful from memory. In that case, just skip swap readahead.
> 
> Any quantitative testing results?

I have test results with a real workload now.

Without this patch, enabling zswap results in about an 8% increase
in p99 request latency. With these patches, the latency penalty for
enabling zswap is under 1%.

Enabling zswap allows us to give the main workload a little more
memory, since the spikes in memory demand caused by things like
system management software no longer cause large latency issues.

-- 
All Rights Reversed.



* Re: [PATCH 0/2] mm,swap: skip swap readahead for instant IO (like zswap)
  2020-10-05 17:32   ` Rik van Riel
@ 2020-10-09 14:38     ` Rik van Riel
  0 siblings, 0 replies; 11+ messages in thread
From: Rik van Riel @ 2020-10-09 14:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, niketa, sjenning, ddstreet,
	konrad.wilk, hannes

[-- Attachment #1: Type: text/plain, Size: 1174 bytes --]

On Mon, 2020-10-05 at 13:32 -0400, Rik van Riel wrote:
> On Tue, 2020-09-22 at 10:12 -0700, Andrew Morton wrote:
> > On Mon, 21 Sep 2020 22:01:46 -0400 Rik van Riel <riel@surriel.com>
> > wrote:

> > Any quantitative testing results?
> 
> I have test results with a real workload now.
> 
> Without this patch, enabling zswap results in about an 
> 8% increase in p99 request latency. With these patches,
> the latency penalty for enabling zswap is under 1%.

Never mind that. On larger tests the effect seems to disappear,
probably because the logic in __swapin_nr_pages() already reduces
the number of pages read ahead to 2 on workloads with lots of
random access.

That reduces the latency effects observed.
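
The window adaptation being referred to can be sketched as a simplified
model (not the exact __swapin_nr_pages() heuristic): hits on previously
read-ahead pages grow the window, misses shrink it, and the result is
clamped to at most 1 << page_cluster. Under heavily random access the
window collapses toward the minimum, which is why the latency effect of
readahead mostly disappears on such workloads.

```c
#include <assert.h>

/*
 * Simplified model of adaptive swap readahead sizing. "hits" is
 * whether recent faults landed on pages we read ahead; page_cluster
 * caps the window, as the vm.page-cluster sysctl does in the kernel.
 */
static unsigned long model_swapin_nr_pages(unsigned long prev_win,
					   int hits, int page_cluster)
{
	unsigned long max_win = 1UL << page_cluster;
	unsigned long win;

	if (hits)
		win = prev_win * 2;	/* readahead paying off: widen */
	else
		win = prev_win / 2;	/* random access: narrow */

	if (win < 1)
		win = 1;
	if (win > max_win)
		win = max_win;
	return win;
}
```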

Now we might still see some memory waste due to decompressing pages
we don't need, but I have not seen any real effects from that yet,
either.

I think it may be time to focus on a larger memory waste with
zswap: leaving the compressed copy of memory around when we
decompress the memory at swapin time.  More aggressively freeing
the compressed memory will probably buy us more than reducing
readahead.

-- 
All Rights Reversed.


