* [PATCH v2] mm, vmscan: prevent useless kswapd loops
@ 2019-07-01 20:18 Shakeel Butt
From: Shakeel Butt @ 2019-07-01 20:18 UTC (permalink / raw)
  To: Johannes Weiner, Mel Gorman, Michal Hocko, Andrew Morton,
	Yang Shi, Vlastimil Babka, Hillf Danton, Roman Gushchin
  Cc: linux-mm, linux-kernel, Shakeel Butt

In production we have noticed hard lockups on large machines running
large jobs, caused by kswapd hoarding the LRU lock within
isolate_lru_pages() when sc->reclaim_idx is 0, i.e. targeting the
lowest and smallest zone. The LRU list was a couple hundred GiBs and
the condition (page_zonenum(page) > sc->reclaim_idx) in
isolate_lru_pages() was effectively skipping GiBs of pages while
holding the LRU spinlock with interrupts disabled.
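
To make the failure mode concrete, the following is a minimal,
userspace-only sketch (not the kernel code; the types, zone layout and
page counts are stand-ins) of how such a scan merely skips, while the
lock stays held, every page whose zone lies above sc->reclaim_idx:

/*
 * Illustration only: models the skip pattern of isolate_lru_pages(),
 * not the kernel implementation.
 */
#include <stdio.h>

enum zone_type { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL, MAX_NR_ZONES };

struct page { enum zone_type zone; };

int main(void)
{
	/* An LRU consisting mostly of ZONE_NORMAL pages, as on a big box. */
	struct page lru[] = {
		{ZONE_NORMAL}, {ZONE_NORMAL}, {ZONE_DMA}, {ZONE_NORMAL},
		{ZONE_NORMAL}, {ZONE_DMA32}, {ZONE_NORMAL}, {ZONE_NORMAL},
	};
	int nr = sizeof(lru) / sizeof(lru[0]);
	enum zone_type reclaim_idx = ZONE_DMA;	/* sc->reclaim_idx == 0 */
	int isolated = 0, skipped = 0;
	int i;

	/* The real code takes the LRU spinlock (IRQs off) here ... */
	for (i = 0; i < nr; i++) {
		if (lru[i].zone > reclaim_idx) {
			skipped++;	/* page is skipped, lock kept */
			continue;
		}
		isolated++;	/* only lowest-zone pages are isolated */
	}
	/* ... and releases it only here, after the whole scan. */

	printf("isolated %d page(s), skipped %d\n", isolated, skipped);
	return 0;
}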

On further inspection, it seems like there are two issues:

1) If kswapd, on return from balance_pgdat(), could not sleep (i.e.
the node is still unbalanced), classzone_idx is unintentionally set to
0 and the whole reclaim cycle of kswapd will try to reclaim only the
lowest and smallest zone while traversing all of memory.

2) Fundamentally, isolate_lru_pages() behaves badly when an
allocation has woken kswapd for a smaller zone on a very large machine
running very large jobs: it can hoard the LRU spinlock while skipping
over hundreds of GiBs of pages.

This patch fixes only (1); (2) needs a more fundamental solution. To
fix (1), in the kswapd context, if pgdat->kswapd_classzone_idx is
invalid, use the classzone_idx of the previous kswapd loop; otherwise,
use the one the waker has requested.
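
And a toy model of (1) and of the fix (illustrative only; the zone
numbering and the helper below are simplified from the patch that
follows):

#include <stdio.h>

#define MAX_NR_ZONES 3	/* sentinel meaning "no wakeup recorded" */

/* Post-fix helper: fall back to the previous cycle's index. */
static int kswapd_classzone_idx(int stored, int prev_classzone_idx)
{
	if (stored == MAX_NR_ZONES)
		return prev_classzone_idx;
	return stored;
}

int main(void)
{
	int classzone_idx = 2;		/* last cycle reclaimed for ZONE_NORMAL */
	int stored = MAX_NR_ZONES;	/* kswapd could not sleep; value reset */

	/* The old call site passed a literal 0, so the next cycle was
	 * demoted to reclaiming for the lowest zone: */
	printf("old: %d\n", kswapd_classzone_idx(stored, 0));

	/* The fixed call site passes the previous classzone_idx instead: */
	printf("new: %d\n", kswapd_classzone_idx(stored, classzone_idx));
	return 0;
}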

Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
Changelog since v1:
- fixed the patch based on Yang Shi's comment.

 mm/vmscan.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9e3292ee5c7c..eacf87f07afe 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3760,19 +3760,18 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 }
 
 /*
- * pgdat->kswapd_classzone_idx is the highest zone index that a recent
- * allocation request woke kswapd for. When kswapd has not woken recently,
- * the value is MAX_NR_ZONES which is not a valid index. This compares a
- * given classzone and returns it or the highest classzone index kswapd
- * was recently woke for.
+ * pgdat->kswapd_classzone_idx is used by the waker to pass kswapd the highest
+ * zone index to be reclaimed. If it is MAX_NR_ZONES, which is not a valid
+ * index, then either kswapd is running for the first time or it could not
+ * sleep after the previous reclaim attempt (the node is still unbalanced).
+ * In that case, return the zone index of the previous kswapd reclaim cycle.
  */
 static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
-					   enum zone_type classzone_idx)
+					   enum zone_type prev_classzone_idx)
 {
 	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
-		return classzone_idx;
-
-	return max(pgdat->kswapd_classzone_idx, classzone_idx);
+		return prev_classzone_idx;
+	return pgdat->kswapd_classzone_idx;
 }
 
 static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order,
@@ -3908,7 +3907,7 @@ static int kswapd(void *p)
 
 		/* Read the new order and classzone_idx */
 		alloc_order = reclaim_order = pgdat->kswapd_order;
-		classzone_idx = kswapd_classzone_idx(pgdat, 0);
+		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
 		pgdat->kswapd_order = 0;
 		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
 
@@ -3961,8 +3960,12 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
 	if (!cpuset_zone_allowed(zone, gfp_flags))
 		return;
 	pgdat = zone->zone_pgdat;
-	pgdat->kswapd_classzone_idx = kswapd_classzone_idx(pgdat,
-							   classzone_idx);
+
+	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
+		pgdat->kswapd_classzone_idx = classzone_idx;
+	else
+		pgdat->kswapd_classzone_idx = max(pgdat->kswapd_classzone_idx,
+						  classzone_idx);
 	pgdat->kswapd_order = max(pgdat->kswapd_order, order);
 	if (!waitqueue_active(&pgdat->kswapd_wait))
 		return;
-- 
2.22.0.410.gd8fdbe21b5-goog



* Re: [PATCH v2] mm, vmscan: prevent useless kswapd loops
@ 2019-07-01 21:49 ` Yang Shi
From: Yang Shi @ 2019-07-01 21:49 UTC (permalink / raw)
  To: Shakeel Butt, Johannes Weiner, Mel Gorman, Michal Hocko,
	Andrew Morton, Vlastimil Babka, Hillf Danton, Roman Gushchin
  Cc: linux-mm, linux-kernel



On 7/1/19 1:18 PM, Shakeel Butt wrote:
> In production we have noticed hard lockups on large machines running
> large jobs, caused by kswapd hoarding the LRU lock within
> isolate_lru_pages() when sc->reclaim_idx is 0, i.e. targeting the
> lowest and smallest zone. The LRU list was a couple hundred GiBs and
> the condition (page_zonenum(page) > sc->reclaim_idx) in
> isolate_lru_pages() was effectively skipping GiBs of pages while
> holding the LRU spinlock with interrupts disabled.
>
> On further inspection, it seems like there are two issues:
>
> 1) If kswapd, on return from balance_pgdat(), could not sleep (i.e.
> the node is still unbalanced), classzone_idx is unintentionally set to
> 0 and the whole reclaim cycle of kswapd will try to reclaim only the
> lowest and smallest zone while traversing all of memory.
>
> 2) Fundamentally, isolate_lru_pages() behaves badly when an
> allocation has woken kswapd for a smaller zone on a very large machine
> running very large jobs: it can hoard the LRU spinlock while skipping
> over hundreds of GiBs of pages.
>
> This patch fixes only (1); (2) needs a more fundamental solution. To
> fix (1), in the kswapd context, if pgdat->kswapd_classzone_idx is
> invalid, use the classzone_idx of the previous kswapd loop; otherwise,
> use the one the waker has requested.
>
> Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> ---
> Changelog since v1:
> - fixed the patch based on Yang Shi's comment.
>
>   mm/vmscan.c | 27 +++++++++++++++------------
>   1 file changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9e3292ee5c7c..eacf87f07afe 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3760,19 +3760,18 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
>   }
>   
>   /*
> - * pgdat->kswapd_classzone_idx is the highest zone index that a recent
> - * allocation request woke kswapd for. When kswapd has not woken recently,
> - * the value is MAX_NR_ZONES which is not a valid index. This compares a
> - * given classzone and returns it or the highest classzone index kswapd
> - * was recently woke for.
> + * pgdat->kswapd_classzone_idx is used by the waker to pass kswapd the highest
> + * zone index to be reclaimed. If it is MAX_NR_ZONES, which is not a valid
> + * index, then either kswapd is running for the first time or it could not
> + * sleep after the previous reclaim attempt (the node is still unbalanced).
> + * In that case, return the zone index of the previous kswapd reclaim cycle.
>    */
>   static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
> -					   enum zone_type classzone_idx)
> +					   enum zone_type prev_classzone_idx)
>   {
>   	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
> -		return classzone_idx;
> -
> -	return max(pgdat->kswapd_classzone_idx, classzone_idx);
> +		return prev_classzone_idx;
> +	return pgdat->kswapd_classzone_idx;
>   }
>   
>   static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order,
> @@ -3908,7 +3907,7 @@ static int kswapd(void *p)
>   
>   		/* Read the new order and classzone_idx */
>   		alloc_order = reclaim_order = pgdat->kswapd_order;
> -		classzone_idx = kswapd_classzone_idx(pgdat, 0);
> +		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
>   		pgdat->kswapd_order = 0;
>   		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
>   
> @@ -3961,8 +3960,12 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
>   	if (!cpuset_zone_allowed(zone, gfp_flags))
>   		return;
>   	pgdat = zone->zone_pgdat;
> -	pgdat->kswapd_classzone_idx = kswapd_classzone_idx(pgdat,
> -							   classzone_idx);
> +
> +	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
> +		pgdat->kswapd_classzone_idx = classzone_idx;
> +	else
> +		pgdat->kswapd_classzone_idx = max(pgdat->kswapd_classzone_idx,
> +						  classzone_idx);
>   	pgdat->kswapd_order = max(pgdat->kswapd_order, order);
>   	if (!waitqueue_active(&pgdat->kswapd_wait))
>   		return;

I agree the manipulation of classzone_idx looks convoluted. This
version looks correct to me. You could add:

Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>

* Re: [PATCH v2] mm, vmscan: prevent useless kswapd loops
@ 2019-07-03  8:38 ` Mel Gorman
From: Mel Gorman @ 2019-07-03  8:38 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Yang Shi,
	Vlastimil Babka, Hillf Danton, Roman Gushchin, linux-mm,
	linux-kernel

On Mon, Jul 01, 2019 at 01:18:47PM -0700, Shakeel Butt wrote:
> In production we have noticed hard lockups on large machines running
> large jobs, caused by kswapd hoarding the LRU lock within
> isolate_lru_pages() when sc->reclaim_idx is 0, i.e. targeting the
> lowest and smallest zone. The LRU list was a couple hundred GiBs and
> the condition (page_zonenum(page) > sc->reclaim_idx) in
> isolate_lru_pages() was effectively skipping GiBs of pages while
> holding the LRU spinlock with interrupts disabled.
> 
> On further inspection, it seems like there are two issues:
> 
> 1) If kswapd, on return from balance_pgdat(), could not sleep (i.e.
> the node is still unbalanced), classzone_idx is unintentionally set to
> 0 and the whole reclaim cycle of kswapd will try to reclaim only the
> lowest and smallest zone while traversing all of memory.
> 
> 2) Fundamentally, isolate_lru_pages() behaves badly when an
> allocation has woken kswapd for a smaller zone on a very large machine
> running very large jobs: it can hoard the LRU spinlock while skipping
> over hundreds of GiBs of pages.
> 
> This patch fixes only (1); (2) needs a more fundamental solution. To
> fix (1), in the kswapd context, if pgdat->kswapd_classzone_idx is
> invalid, use the classzone_idx of the previous kswapd loop; otherwise,
> use the one the waker has requested.
> 
> Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs
