linux-kernel.vger.kernel.org archive mirror
* [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not
@ 2018-12-30  4:49 Yang Shi
  2018-12-30  4:49 ` [v4 PATCH 2/2] mm: swap: add comment for swap_vma_readahead Yang Shi
  2019-01-02 23:00 ` [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not Daniel Jordan
  0 siblings, 2 replies; 7+ messages in thread
From: Yang Shi @ 2018-12-30  4:49 UTC (permalink / raw)
  To: ying.huang, tim.c.chen, minchan, akpm; +Cc: yang.shi, linux-mm, linux-kernel

Swap readahead reads in a few pages regardless of whether the underlying
device is busy or not.  This may incur a long waiting time if the device is
congested, and it may also exacerbate the congestion.

Use inode_read_congested() to check whether the underlying device is busy,
as file page readahead does.  Get the inode from swap_info_struct.
Although we could add the inode information to swap_address_space
(address_space->host), it may lead to unexpected side effects, i.e.
it may break mapping_cap_account_dirty().  Using the inode from
swap_info_struct seems simple and good enough.

Just do the check in swap_cluster_readahead(), since swap_vma_readahead()
is only used for non-rotational devices, which are much less likely to be
congested than a traditional HDD.

Although swap slots may be consecutive on a swap partition, they may still
be fragmented on a swap file.  This check would help reduce excessive stalls
in such a case.
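
In essence, the change adds an early check to swap_cluster_readahead()
(a condensed sketch; see the diff below for the exact placement):

	struct inode *inode;

	if (si->flags & (SWP_BLKDEV | SWP_FS)) {
		inode = si->swap_file->f_mapping->host;
		/* Skip readahead entirely if the backing device is congested */
		if (inode_read_congested(inode))
			goto skip;
	}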

The test on my virtual machine with congested HDD shows long tail
latency is reduced significantly.

Without the patch
 page_fault1_thr-1490  [023]   129.311706: funcgraph_entry:      #57377.796 us |  do_swap_page();
 page_fault1_thr-1490  [023]   129.369103: funcgraph_entry:        5.642us   |  do_swap_page();
 page_fault1_thr-1490  [023]   129.369119: funcgraph_entry:      #1289.592 us |  do_swap_page();
 page_fault1_thr-1490  [023]   129.370411: funcgraph_entry:        4.957us   |  do_swap_page();
 page_fault1_thr-1490  [023]   129.370419: funcgraph_entry:        1.940us   |  do_swap_page();
 page_fault1_thr-1490  [023]   129.378847: funcgraph_entry:      #1411.385 us |  do_swap_page();
 page_fault1_thr-1490  [023]   129.380262: funcgraph_entry:        3.916us   |  do_swap_page();
 page_fault1_thr-1490  [023]   129.380275: funcgraph_entry:      #4287.751 us |  do_swap_page();

With the patch
      runtest.py-1417  [020]   301.925911: funcgraph_entry:      #9870.146 us |  do_swap_page();
      runtest.py-1417  [020]   301.935785: funcgraph_entry:        9.802us   |  do_swap_page();
      runtest.py-1417  [020]   301.935799: funcgraph_entry:        3.551us   |  do_swap_page();
      runtest.py-1417  [020]   301.935806: funcgraph_entry:        2.142us   |  do_swap_page();
      runtest.py-1417  [020]   301.935853: funcgraph_entry:        6.938us   |  do_swap_page();
      runtest.py-1417  [020]   301.935864: funcgraph_entry:        3.765us   |  do_swap_page();
      runtest.py-1417  [020]   301.935871: funcgraph_entry:        3.600us   |  do_swap_page();
      runtest.py-1417  [020]   301.935878: funcgraph_entry:        7.202us   |  do_swap_page();

Acked-by: Tim Chen <tim.c.chen@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
v4: Added observed effects in the commit log per Andrew
v3: Move inode dereference under swap device type check per Tim Chen
v2: Check the swap device type per Tim Chen

 mm/swap_state.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index fd2f21e..78d500e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -538,11 +538,18 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	bool do_poll = true, page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
+	struct inode *inode = NULL;
 
 	mask = swapin_nr_pages(offset) - 1;
 	if (!mask)
 		goto skip;
 
+	if (si->flags & (SWP_BLKDEV | SWP_FS)) {
+		inode = si->swap_file->f_mapping->host;
+		if (inode_read_congested(inode))
+			goto skip;
+	}
+
 	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
-- 
1.8.3.1



* [v4 PATCH 2/2] mm: swap: add comment for swap_vma_readahead
  2018-12-30  4:49 [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not Yang Shi
@ 2018-12-30  4:49 ` Yang Shi
  2019-01-03  7:41   ` Huang, Ying
  2019-01-02 23:00 ` [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not Daniel Jordan
  1 sibling, 1 reply; 7+ messages in thread
From: Yang Shi @ 2018-12-30  4:49 UTC (permalink / raw)
  To: ying.huang, tim.c.chen, minchan, akpm; +Cc: yang.shi, linux-mm, linux-kernel

swap_vma_readahead()'s comment is missing, just add it.

Cc: Huang Ying <ying.huang@intel.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/swap_state.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 78d500e..dd8f698 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -698,6 +698,23 @@ static void swap_ra_info(struct vm_fault *vmf,
 	pte_unmap(orig_pte);
 }
 
+/**
+ * swap_vm_readahead - swap in pages in hope we need them soon
+ * @entry: swap entry of this memory
+ * @gfp_mask: memory allocation flags
+ * @vmf: fault information
+ *
+ * Returns the struct page for entry and addr, after queueing swapin.
+ *
+ * Primitive swap readahead code. We simply read in a few pages whoes
+ * virtual addresses are around the fault address in the same vma.
+ *
+ * This has been extended to use the NUMA policies from the mm triggering
+ * the readahead.
+ *
+ * Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
+ *
+ */
 static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 				       struct vm_fault *vmf)
 {
-- 
1.8.3.1



* Re: [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not
  2018-12-30  4:49 [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not Yang Shi
  2018-12-30  4:49 ` [v4 PATCH 2/2] mm: swap: add comment for swap_vma_readahead Yang Shi
@ 2019-01-02 23:00 ` Daniel Jordan
  2019-01-03 17:10   ` Yang Shi
  1 sibling, 1 reply; 7+ messages in thread
From: Daniel Jordan @ 2019-01-02 23:00 UTC (permalink / raw)
  To: Yang Shi; +Cc: ying.huang, tim.c.chen, minchan, akpm, linux-mm, linux-kernel

On Sun, Dec 30, 2018 at 12:49:34PM +0800, Yang Shi wrote:
> The test on my virtual machine with congested HDD shows long tail
> latency is reduced significantly.
> 
> Without the patch
>  page_fault1_thr-1490  [023]   129.311706: funcgraph_entry:      #57377.796 us |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.369103: funcgraph_entry:        5.642us   |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.369119: funcgraph_entry:      #1289.592 us |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.370411: funcgraph_entry:        4.957us   |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.370419: funcgraph_entry:        1.940us   |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.378847: funcgraph_entry:      #1411.385 us |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.380262: funcgraph_entry:        3.916us   |  do_swap_page();
>  page_fault1_thr-1490  [023]   129.380275: funcgraph_entry:      #4287.751 us |  do_swap_page();
> 
> With the patch
>       runtest.py-1417  [020]   301.925911: funcgraph_entry:      #9870.146 us |  do_swap_page();
>       runtest.py-1417  [020]   301.935785: funcgraph_entry:        9.802us   |  do_swap_page();
>       runtest.py-1417  [020]   301.935799: funcgraph_entry:        3.551us   |  do_swap_page();
>       runtest.py-1417  [020]   301.935806: funcgraph_entry:        2.142us   |  do_swap_page();
>       runtest.py-1417  [020]   301.935853: funcgraph_entry:        6.938us   |  do_swap_page();
>       runtest.py-1417  [020]   301.935864: funcgraph_entry:        3.765us   |  do_swap_page();
>       runtest.py-1417  [020]   301.935871: funcgraph_entry:        3.600us   |  do_swap_page();
>       runtest.py-1417  [020]   301.935878: funcgraph_entry:        7.202us   |  do_swap_page();

Hi Yang, I guess runtest.py just calls page_fault1_thr?  Being explicit about
this may improve the changelog for those unfamiliar with will-it-scale.

May also be useful to name will-it-scale and how it was run (#thr, runtime,
system cpus/memory/swap) for more context.


* Re: [v4 PATCH 2/2] mm: swap: add comment for swap_vma_readahead
  2018-12-30  4:49 ` [v4 PATCH 2/2] mm: swap: add comment for swap_vma_readahead Yang Shi
@ 2019-01-03  7:41   ` Huang, Ying
  2019-01-03 17:12     ` Yang Shi
  0 siblings, 1 reply; 7+ messages in thread
From: Huang, Ying @ 2019-01-03  7:41 UTC (permalink / raw)
  To: Yang Shi; +Cc: tim.c.chen, minchan, akpm, linux-mm, linux-kernel

Yang Shi <yang.shi@linux.alibaba.com> writes:

> swap_vma_readahead()'s comment is missing, just add it.
>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Tim Chen <tim.c.chen@intel.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
>  mm/swap_state.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 78d500e..dd8f698 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -698,6 +698,23 @@ static void swap_ra_info(struct vm_fault *vmf,
>  	pte_unmap(orig_pte);
>  }
>  
> +/**
> + * swap_vm_readahead - swap in pages in hope we need them soon

s/swap_vm_readahead/swap_vma_readahead/

> + * @entry: swap entry of this memory
> + * @gfp_mask: memory allocation flags
> + * @vmf: fault information
> + *
> + * Returns the struct page for entry and addr, after queueing swapin.
> + *
> + * Primitive swap readahead code. We simply read in a few pages whoes
> + * virtual addresses are around the fault address in the same vma.
> + *
> + * This has been extended to use the NUMA policies from the mm triggering
> + * the readahead.

What is this?  I know you copy it from swap_cluster_readahead(), but we
have only one mm for vma readahead.

> + * Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.

Better to make it explicit that you are talking about mmap_sem?

Best Regards,
Huang, Ying

> + *
> + */
>  static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>  				       struct vm_fault *vmf)
>  {


* Re: [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not
  2019-01-02 23:00 ` [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not Daniel Jordan
@ 2019-01-03 17:10   ` Yang Shi
  2019-01-03 17:16     ` Daniel Jordan
  0 siblings, 1 reply; 7+ messages in thread
From: Yang Shi @ 2019-01-03 17:10 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: ying.huang, tim.c.chen, minchan, akpm, linux-mm, linux-kernel



On 1/2/19 3:00 PM, Daniel Jordan wrote:
> On Sun, Dec 30, 2018 at 12:49:34PM +0800, Yang Shi wrote:
>> The test on my virtual machine with congested HDD shows long tail
>> latency is reduced significantly.
>>
>> Without the patch
>>   page_fault1_thr-1490  [023]   129.311706: funcgraph_entry:      #57377.796 us |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.369103: funcgraph_entry:        5.642us   |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.369119: funcgraph_entry:      #1289.592 us |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.370411: funcgraph_entry:        4.957us   |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.370419: funcgraph_entry:        1.940us   |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.378847: funcgraph_entry:      #1411.385 us |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.380262: funcgraph_entry:        3.916us   |  do_swap_page();
>>   page_fault1_thr-1490  [023]   129.380275: funcgraph_entry:      #4287.751 us |  do_swap_page();
>>
>> With the patch
>>        runtest.py-1417  [020]   301.925911: funcgraph_entry:      #9870.146 us |  do_swap_page();
>>        runtest.py-1417  [020]   301.935785: funcgraph_entry:        9.802us   |  do_swap_page();
>>        runtest.py-1417  [020]   301.935799: funcgraph_entry:        3.551us   |  do_swap_page();
>>        runtest.py-1417  [020]   301.935806: funcgraph_entry:        2.142us   |  do_swap_page();
>>        runtest.py-1417  [020]   301.935853: funcgraph_entry:        6.938us   |  do_swap_page();
>>        runtest.py-1417  [020]   301.935864: funcgraph_entry:        3.765us   |  do_swap_page();
>>        runtest.py-1417  [020]   301.935871: funcgraph_entry:        3.600us   |  do_swap_page();
>>        runtest.py-1417  [020]   301.935878: funcgraph_entry:        7.202us   |  do_swap_page();
> Hi Yang, I guess runtest.py just calls page_fault1_thr?  Being explicit about

Yes, runtest.py is the wrapper script of will-it-scale.
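
For reference, each page_fault1 thread is basically a loop that maps a chunk
of anonymous memory, touches every page, then unmaps it again -- roughly like
the sketch below (illustrative only, not the actual will-it-scale source):

	#include <assert.h>
	#include <string.h>
	#include <sys/mman.h>

	#define CHUNK (128UL << 20)	/* 128MB per thread, per the description below */

	static void page_fault1_loop(unsigned long *iterations)
	{
		for (;;) {
			char *c = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			assert(c != MAP_FAILED);
			memset(c, 1, CHUNK);	/* fault in every page */
			munmap(c, CHUNK);
			(*iterations)++;
		}
	}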

> this may improve the changelog for those unfamiliar with will-it-scale.

Sure.

>
> May also be useful to name will-it-scale and how it was run (#thr, runtime,
> system cpus/memory/swap) for more context.

How about the below description:

The test with page_fault1 of will-it-scale (sometimes tracing may just
show runtest.py, which is the wrapper script of page_fault1), which
basically launches NR_CPU threads to generate 128MB of anonymous pages
for each thread, on my virtual machine with a congested HDD shows that
long tail latency is reduced significantly.

Without the patch
  page_fault1_thr-1490  [023]   129.311706: funcgraph_entry: #57377.796 us |  do_swap_page();
  page_fault1_thr-1490  [023]   129.369103: funcgraph_entry: 5.642us   |  do_swap_page();
  page_fault1_thr-1490  [023]   129.369119: funcgraph_entry: #1289.592 us |  do_swap_page();
  page_fault1_thr-1490  [023]   129.370411: funcgraph_entry: 4.957us   |  do_swap_page();
  page_fault1_thr-1490  [023]   129.370419: funcgraph_entry: 1.940us   |  do_swap_page();
  page_fault1_thr-1490  [023]   129.378847: funcgraph_entry: #1411.385 us |  do_swap_page();
  page_fault1_thr-1490  [023]   129.380262: funcgraph_entry: 3.916us   |  do_swap_page();
  page_fault1_thr-1490  [023]   129.380275: funcgraph_entry: #4287.751 us |  do_swap_page();

With the patch
       runtest.py-1417  [020]   301.925911: funcgraph_entry: #9870.146 us |  do_swap_page();
       runtest.py-1417  [020]   301.935785: funcgraph_entry: 9.802us   |  do_swap_page();
       runtest.py-1417  [020]   301.935799: funcgraph_entry: 3.551us   |  do_swap_page();
       runtest.py-1417  [020]   301.935806: funcgraph_entry: 2.142us   |  do_swap_page();
       runtest.py-1417  [020]   301.935853: funcgraph_entry: 6.938us   |  do_swap_page();
       runtest.py-1417  [020]   301.935864: funcgraph_entry: 3.765us   |  do_swap_page();
       runtest.py-1417  [020]   301.935871: funcgraph_entry: 3.600us   |  do_swap_page();
       runtest.py-1417  [020]   301.935878: funcgraph_entry: 7.202us   |  do_swap_page();


Thanks,
Yang




* Re: [v4 PATCH 2/2] mm: swap: add comment for swap_vma_readahead
  2019-01-03  7:41   ` Huang, Ying
@ 2019-01-03 17:12     ` Yang Shi
  0 siblings, 0 replies; 7+ messages in thread
From: Yang Shi @ 2019-01-03 17:12 UTC (permalink / raw)
  To: Huang, Ying; +Cc: tim.c.chen, minchan, akpm, linux-mm, linux-kernel



On 1/2/19 11:41 PM, Huang, Ying wrote:
> Yang Shi <yang.shi@linux.alibaba.com> writes:
>
>> swap_vma_readahead()'s comment is missing, just add it.
>>
>> Cc: Huang Ying <ying.huang@intel.com>
>> Cc: Tim Chen <tim.c.chen@intel.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> ---
>>   mm/swap_state.c | 17 +++++++++++++++++
>>   1 file changed, 17 insertions(+)
>>
>> diff --git a/mm/swap_state.c b/mm/swap_state.c
>> index 78d500e..dd8f698 100644
>> --- a/mm/swap_state.c
>> +++ b/mm/swap_state.c
>> @@ -698,6 +698,23 @@ static void swap_ra_info(struct vm_fault *vmf,
>>   	pte_unmap(orig_pte);
>>   }
>>   
>> +/**
>> + * swap_vm_readahead - swap in pages in hope we need them soon
> s/swap_vm_readahead/swap_vma_readahead/
>
>> + * @entry: swap entry of this memory
>> + * @gfp_mask: memory allocation flags
>> + * @vmf: fault information
>> + *
>> + * Returns the struct page for entry and addr, after queueing swapin.
>> + *
>> + * Primitive swap readahead code. We simply read in a few pages whoes
>> + * virtual addresses are around the fault address in the same vma.
>> + *
>> + * This has been extended to use the NUMA policies from the mm triggering
>> + * the readahead.
> What is this?  I know you copy it from swap_cluster_readahead(), but we
> have only one mm for vma readahead.

Aha, I see. Actually I was confused by this too, so just copied from 
swap_cluster_readahead.

>
>> + * Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
> Better to make it explicit that you are talking about mmap_sem?

Sure.
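
Something like the below, perhaps (just a sketch of the reworded comment
addressing both points above, not the final wording):

	/**
	 * swap_vma_readahead - swap in pages in hope we need them soon
	 * @fentry: swap entry of this memory
	 * @gfp_mask: memory allocation flags
	 * @vmf: fault information
	 *
	 * Returns the struct page for entry and addr, after queueing swapin.
	 *
	 * Primitive swap readahead code.  We simply read in a few pages whose
	 * virtual addresses are around the fault address in the same vma.
	 *
	 * Caller must hold read mmap_sem if vmf->vma is not NULL.
	 */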

Thanks,
Yang

>
> Best Regards,
> Huang, Ying
>
>> + *
>> + */
>>   static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>>   				       struct vm_fault *vmf)
>>   {



* Re: [v4 PATCH 1/2] mm: swap: check if swap backing device is congested or not
  2019-01-03 17:10   ` Yang Shi
@ 2019-01-03 17:16     ` Daniel Jordan
  0 siblings, 0 replies; 7+ messages in thread
From: Daniel Jordan @ 2019-01-03 17:16 UTC (permalink / raw)
  To: Yang Shi
  Cc: Daniel Jordan, ying.huang, tim.c.chen, minchan, akpm, linux-mm,
	linux-kernel

On Thu, Jan 03, 2019 at 09:10:13AM -0800, Yang Shi wrote:
> How about the below description:
> 
> The test with page_fault1 of will-it-scale (sometimes tracing may just show
> runtest.py, which is the wrapper script of page_fault1), which basically
> launches NR_CPU threads to generate 128MB of anonymous pages for each thread,
> on my virtual machine with a congested HDD shows that long tail latency is
> reduced significantly.
> 
> Without the patch
>  page_fault1_thr-1490  [023]   129.311706: funcgraph_entry: #57377.796 us | do_swap_page();
>  page_fault1_thr-1490  [023]   129.369103: funcgraph_entry: 5.642us   | do_swap_page();
>  page_fault1_thr-1490  [023]   129.369119: funcgraph_entry: #1289.592 us | do_swap_page();
>  page_fault1_thr-1490  [023]   129.370411: funcgraph_entry: 4.957us   | do_swap_page();
>  page_fault1_thr-1490  [023]   129.370419: funcgraph_entry: 1.940us   | do_swap_page();
>  page_fault1_thr-1490  [023]   129.378847: funcgraph_entry: #1411.385 us | do_swap_page();
>  page_fault1_thr-1490  [023]   129.380262: funcgraph_entry: 3.916us   | do_swap_page();
>  page_fault1_thr-1490  [023]   129.380275: funcgraph_entry: #4287.751 us | do_swap_page();
> 
> With the patch
>       runtest.py-1417  [020]   301.925911: funcgraph_entry: #9870.146 us | do_swap_page();
>       runtest.py-1417  [020]   301.935785: funcgraph_entry: 9.802us   | do_swap_page();
>       runtest.py-1417  [020]   301.935799: funcgraph_entry: 3.551us   | do_swap_page();
>       runtest.py-1417  [020]   301.935806: funcgraph_entry: 2.142us   | do_swap_page();
>       runtest.py-1417  [020]   301.935853: funcgraph_entry: 6.938us   | do_swap_page();
>       runtest.py-1417  [020]   301.935864: funcgraph_entry: 3.765us   | do_swap_page();
>       runtest.py-1417  [020]   301.935871: funcgraph_entry: 3.600us   | do_swap_page();
>       runtest.py-1417  [020]   301.935878: funcgraph_entry: 7.202us   | do_swap_page();

That's better, thanks!

