* [PATCH] swap: Try to scan more free slots even when fragmented
@ 2020-04-27  3:00 Huang Ying
  2020-04-27 21:18 ` Tim Chen
  0 siblings, 1 reply; 2+ messages in thread
From: Huang Ying @ 2020-04-27  3:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Dave Hansen, Michal Hocko,
	Minchan Kim, Tim Chen, Hugh Dickins

Now, the scalability of the swap code drops considerably when the swap
device becomes fragmented, because swap slot allocation batching stops
working.  To solve the problem, this patch tries to scan a few more swap
slots, with strictly restricted effort, to keep batching swap slot
allocation even when the swap device is fragmented.  Tests show that the
benchmark score can increase by up to 37.1% with the patch.  Details are
as follows.

The swap code has a per-cpu cache of swap slots.  These batch swap
space allocations to improve swap subsystem scaling.  In the following
code path,

  add_to_swap()
    get_swap_page()
      refill_swap_slots_cache()
        get_swap_pages()
	  scan_swap_map_slots()

scan_swap_map_slots() and get_swap_pages() can return multiple swap
slots for each call.  These slots will be cached in the per-CPU swap
slots cache, so that several following swap slot requests will be
fulfilled there to avoid the lock contention in the lower level swap
space allocation/freeing code path.
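
For illustration, here is a minimal, self-contained userspace sketch of
the batching idea (the names and sizes such as grab_slots(), cache_get()
and BATCH are hypothetical, this is not the actual mm/swap_slots.c code):
each consumer drains a private slot cache and takes the shared allocator
lock only once per batch.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NSLOTS	4096
#define BATCH	64	/* plays the role of the per-CPU cache size */

static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;
static bool slot_used[NSLOTS];
static unsigned long lock_taken;

/* Grab up to n free slots under the shared lock (cf. get_swap_pages()). */
static int grab_slots(int n, int out[])
{
	int i, got = 0;

	pthread_mutex_lock(&alloc_lock);
	lock_taken++;
	for (i = 0; i < NSLOTS && got < n; i++) {
		if (!slot_used[i]) {
			slot_used[i] = true;
			out[got++] = i;
		}
	}
	pthread_mutex_unlock(&alloc_lock);
	return got;
}

/* A consumer-local cache, refilled one batch at a time. */
struct slot_cache { int slots[BATCH]; int nr, cur; };

static int cache_get(struct slot_cache *c)
{
	if (c->cur == c->nr) {		/* cache empty: refill in one call */
		c->nr = grab_slots(BATCH, c->slots);
		c->cur = 0;
		if (!c->nr)
			return -1;	/* no free slots left */
	}
	return c->slots[c->cur++];
}

int main(void)
{
	struct slot_cache cache = { .nr = 0, .cur = 0 };
	int i;

	for (i = 0; i < 1000; i++)
		cache_get(&cache);
	/* 1000 requests need only about 1000/BATCH lock acquisitions. */
	printf("lock taken %lu times for 1000 requests\n", lock_taken);
	return 0;
}

When the swap device is fragmented, the real allocator effectively
returns a single slot per call, which brings back the per-request lock
traffic described below.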

But this only works when there are free swap clusters.  If a swap
device becomes so fragmented that there are no free swap clusters,
scan_swap_map_slots() and get_swap_pages() will return only one swap
slot per call in the above code path.  Effectively, this falls back to
the situation before the swap slots cache was introduced: the heavy
contention on the swap related locks kills the scalability.

Why does it work this way?  Because the swap device could be large and
scanning for free swap slots could be quite time consuming, a
conservative method was chosen to avoid spending too much time scanning
for free slots.

In fact, this can be improved by scanning a few more free slots with
strictly restricted effort, which is what this patch implements.  In
scan_swap_map_slots(), after the first free swap slot is found, we keep
scanning a little further, but only if we haven't already scanned too
many slots (< LATENCY_LIMIT).  That is, the added scanning latency is
strictly bounded.
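
As a rough illustration of the idea, here is a simplified, self-contained
userspace sketch (the names scan_for_slots() and SCAN_BUDGET are
hypothetical; this is not the kernel code itself): the first free slot is
found as before, and only the extra scanning used to batch more slots is
charged against a fixed budget.

#include <stdbool.h>
#include <stddef.h>

#define SCAN_BUDGET	256	/* cf. LATENCY_LIMIT in mm/swapfile.c */

/*
 * Collect up to n_goal free slots starting at 'start'.  The search for
 * the first slot is unrestricted; the extra scanning that batches more
 * slots is bounded by SCAN_BUDGET, so the added latency stays small.
 */
int scan_for_slots(const bool *used, size_t nslots, size_t start,
		   int n_goal, size_t *out)
{
	int n_ret = 0;
	int budget = SCAN_BUDGET;
	size_t off;

	/* Find the first free slot, as before the patch. */
	for (off = start; off < nslots; off++) {
		if (!used[off]) {
			out[n_ret++] = off++;
			break;
		}
	}

	/* New: keep scanning a little further, but only within the budget. */
	for (; off < nslots && n_ret < n_goal && --budget > 0; off++) {
		if (!used[off])
			out[n_ret++] = off;
	}

	return n_ret;
}

The kernel patch below does the equivalent inside scan_swap_map_slots(),
reusing the existing latency_ration budget and skipping the extra scan
entirely when many slots have already been scanned (scanned_many).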

To test the patch, we have run a 16-process pmbench memory benchmark on
a 2-socket server machine with 48 cores.  Multiple RAM disks are
configured as the swap devices.  The pmbench working-set size is much
larger than the available memory, so that swapping is triggered.  The
memory read/write ratio is 80/20 and the access pattern is random, so
the swap space becomes highly fragmented during the test.  In the
original implementation, the lock contention on the swap related locks
is very heavy.  The perf profiling data for the lock contention code
paths is as follows:

_raw_spin_lock.get_swap_pages.get_swap_page.add_to_swap:             21.03
_raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node:    1.92
_raw_spin_lock_irq.shrink_active_list.shrink_lruvec.shrink_node:      1.72
_raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages:       0.69

After applying this patch, it becomes:

_raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node:    4.89
_raw_spin_lock_irq.shrink_active_list.shrink_lruvec.shrink_node:      3.85
_raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages:       1.1
_raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_swap_page: 0.88

That is, the lock contention on the swap locks is eliminated.

And the pmbench score increases by 37.1%.  The swapin throughput
increases by 45.7%, from 2.02 GB/s to 2.94 GB/s, while the swapout
throughput increases by 45.3%, from 2.04 GB/s to 2.97 GB/s.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
---
 mm/swapfile.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index ee9bd89857a2..a0a123e59ce6 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -739,6 +739,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 	unsigned long last_in_cluster = 0;
 	int latency_ration = LATENCY_LIMIT;
 	int n_ret = 0;
+	bool scanned_many = false;
 
 	/*
 	 * We try to cluster swap pages by allocating them sequentially
@@ -870,6 +871,25 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 		goto checks;
 	}
 
+	/*
+	 * Even if there are no free clusters available (fragmented),
+	 * try to scan a little more quickly with the lock held, unless
+	 * we have scanned too many slots already.
+	 */
+	if (!scanned_many) {
+		unsigned long scan_limit;
+
+		if (offset < scan_base)
+			scan_limit = scan_base;
+		else
+			scan_limit = si->highest_bit;
+		for (; offset <= scan_limit && --latency_ration > 0;
+		     offset++) {
+			if (!si->swap_map[offset])
+				goto checks;
+		}
+	}
+
 done:
 	si->flags -= SWP_SCANNING;
 	return n_ret;
@@ -889,6 +909,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 		if (unlikely(--latency_ration < 0)) {
 			cond_resched();
 			latency_ration = LATENCY_LIMIT;
+			scanned_many = true;
 		}
 	}
 	offset = si->lowest_bit;
@@ -905,6 +926,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 		if (unlikely(--latency_ration < 0)) {
 			cond_resched();
 			latency_ration = LATENCY_LIMIT;
+			scanned_many = true;
 		}
 		offset++;
 	}
-- 
2.25.1




* Re: [PATCH] swap: Try to scan more free slots even when fragmented
  2020-04-27  3:00 [PATCH] swap: Try to scan more free slots even when fragmented Huang Ying
@ 2020-04-27 21:18 ` Tim Chen
  0 siblings, 0 replies; 2+ messages in thread
From: Tim Chen @ 2020-04-27 21:18 UTC (permalink / raw)
  To: Huang Ying, Andrew Morton
  Cc: linux-mm, linux-kernel, Dave Hansen, Michal Hocko, Minchan Kim,
	Hugh Dickins



On 4/26/20 8:00 PM, Huang Ying wrote:
> Now, the scalability of the swap code drops considerably when the swap
> device becomes fragmented, because swap slot allocation batching stops
> working.  To solve the problem, this patch tries to scan a few more
> swap slots, with strictly restricted effort, to keep batching swap slot
> allocation even when the swap device is fragmented.  Tests show that
> the benchmark score can increase by up to 37.1% with the patch.
> Details are as follows.
> 
> The swap code has a per-cpu cache of swap slots.  These batch swap
> space allocations to improve swap subsystem scaling.  In the following
> code path,
> 
>   add_to_swap()
>     get_swap_page()
>       refill_swap_slots_cache()
>         get_swap_pages()
> 	  scan_swap_map_slots()
> 
> scan_swap_map_slots() and get_swap_pages() can return multiple swap
> slots for each call.  These slots will be cached in the per-CPU swap
> slots cache, so that several following swap slot requests will be
> fulfilled there to avoid the lock contention in the lower level swap
> space allocation/freeing code path.
> 
> But this only works when there are free swap clusters.  If a swap
> device becomes so fragmented that there are no free swap clusters,
> scan_swap_map_slots() and get_swap_pages() will return only one swap
> slot per call in the above code path.  Effectively, this falls back to
> the situation before the swap slots cache was introduced: the heavy
> contention on the swap related locks kills the scalability.
> 
> Why does it work this way?  Because the swap device could be large and
> scanning for free swap slots could be quite time consuming, a
> conservative method was chosen to avoid spending too much time scanning
> for free slots.
> 
> In fact, this can be improved by scanning a few more free slots with
> strictly restricted effort, which is what this patch implements.  In
> scan_swap_map_slots(), after the first free swap slot is found, we keep
> scanning a little further, but only if we haven't already scanned too
> many slots (< LATENCY_LIMIT).  That is, the added scanning latency is
> strictly bounded.
> 
> To test the patch, we have run a 16-process pmbench memory benchmark on
> a 2-socket server machine with 48 cores.  Multiple RAM disks are
> configured as the swap devices.  The pmbench working-set size is much
> larger than the available memory, so that swapping is triggered.  The
> memory read/write ratio is 80/20 and the access pattern is random, so
> the swap space becomes highly fragmented during the test.  In the
> original implementation, the lock contention on the swap related locks
> is very heavy.  The perf profiling data for the lock contention code
> paths is as follows:
> 
> _raw_spin_lock.get_swap_pages.get_swap_page.add_to_swap:             21.03
> _raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node:    1.92
> _raw_spin_lock_irq.shrink_active_list.shrink_lruvec.shrink_node:      1.72
> _raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages:       0.69
> 
> After applying this patch, it becomes:
> 
> _raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node:    4.89
> _raw_spin_lock_irq.shrink_active_list.shrink_lruvec.shrink_node:      3.85
> _raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages:       1.1
> _raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_swap_page: 0.88
> 
> That is, the lock contention on the swap locks is eliminated.
> 
> And the pmbench score increases by 37.1%.  The swapin throughput
> increases by 45.7%, from 2.02 GB/s to 2.94 GB/s, while the swapout
> throughput increases by 45.3%, from 2.04 GB/s to 2.97 GB/s.
> 

Thanks.  Acked-by: Tim Chen <tim.c.chen@linux.intel.com>

Tim

> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Hugh Dickins <hughd@google.com>


