From: Michal Hocko <mhocko@kernel.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Sangseok Lee <sangseok.lee@lge.com>
Subject: Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
Date: Fri, 7 Oct 2016 11:09:17 +0200
Message-ID: <20161007090917.GA18447@dhcp22.suse.cz>
In-Reply-To: <1475819136-24358-4-git-send-email-minchan@kernel.org>

On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> After fixing the race in the highatomic page count, I still encounter
> OOM with a lot of free memory reserved as highatomic.
> 
> One of the reasons in my testing was that we unreserve free pages only
> if reclaim makes progress. Otherwise, we never get a chance to unreserve.
> 
> Another problem after fixing that was that it doesn't guarantee
> unreserving every page of the highatomic pageblocks, because it releases
> just *a* single pageblock, which could hold only a few free pages, so
> another context can easily steal them. Thus a process stuck in direct
> reclaim can still end up hitting OOM although there are free pages which
> could be unreserved.
> 
> This patch changes the logic so that it unreserves pageblocks
> proportionally to no_progress_loops. IOW, on the first reclaim retry it
> will try to unreserve one pageblock, on the second retry it will try to
> unreserve 1/MAX_RECLAIM_RETRIES of the reserved pageblocks, and finally
> all reserved pageblocks before the OOM.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 44 insertions(+), 13 deletions(-)
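
The patch body is not quoted here, but as I read the changelog, the
proportional scheme would behave roughly like the following standalone
sketch. MAX_RECLAIM_RETRIES is the real kernel constant (16);
pageblocks_to_unreserve() is a made-up helper name and the exact
rounding is my guess, not taken from the patch:

#include <stdio.h>

#define MAX_RECLAIM_RETRIES 16

/*
 * How many reserved highatomic pageblocks would be targeted at a given
 * failed-reclaim retry: scale linearly with no_progress_loops, but always
 * release at least one pageblock per attempt.
 */
static unsigned long pageblocks_to_unreserve(int no_progress_loops,
					     unsigned long reserved_pageblocks)
{
	unsigned long nr;

	nr = reserved_pageblocks * no_progress_loops / MAX_RECLAIM_RETRIES;
	return nr ? nr : 1;
}

int main(void)
{
	unsigned long reserved = 8;	/* pretend 8 pageblocks are reserved */
	int loop;

	for (loop = 1; loop <= MAX_RECLAIM_RETRIES; loop++)
		printf("retry %2d: unreserve up to %lu of %lu pageblocks\n",
		       loop, pageblocks_to_unreserve(loop, reserved), reserved);
	return 0;
}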

This sounds much more complex than it needs to be IMHO. Why wouldn't
something as simple as the following work? Please note that I didn't even
try to compile this. It is just to give you an idea.
---
 mm/page_alloc.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73f60ad6315f..e575a4f38555 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+		bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		if (!zone->nr_reserved_highatomic)
+			continue;
+
+		/*
+		 * Preserve at least one pageblock unless we are really running
+		 * out of memory
+		 */
+		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			set_pageblock_migratetype(page, ac->migratetype);
 			move_freepages_block(zone, page, ac->migratetype);
 			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
+			return true;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return false;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3302,7 +3311,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3418,9 +3427,14 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	/*
 	 * Make sure we converge to OOM if we cannot make any progress
 	 * several times in the row.
+	 * Make a last desperate attempt to throw away the high atomic
+	 * reserves before we give up.
 	 */
-	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
+	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+		if (unreserve_highatomic_pageblock(ac, true))
+			return true;
 		return false;
+	}
 
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
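
To be explicit about the intended semantics: the force pass above can buy
at most one extra reclaim round per reserved pageblock before we finally
give up, so we still converge to OOM. A toy userspace model of that
convergence (not kernel code, just the control flow of the last hunk, with
the watermark logic and the real function arguments stripped out):

#include <stdbool.h>
#include <stdio.h>

#define MAX_RECLAIM_RETRIES 16

static int reserved_pageblocks = 2;	/* pretend two pageblocks are reserved */

/* Stand-in for unreserve_highatomic_pageblock(ac, force). */
static bool unreserve_highatomic(bool force)
{
	if (!reserved_pageblocks)
		return false;
	if (!force && reserved_pageblocks <= 1)	/* keep one unless forced */
		return false;
	reserved_pageblocks--;
	return true;
}

/* Stand-in for the tail of should_reclaim_retry() with the change applied. */
static bool should_reclaim_retry(int *no_progress_loops)
{
	(*no_progress_loops)++;
	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
		/* last attempt to flush the reserves before giving up */
		if (unreserve_highatomic(true))
			return true;
		return false;
	}
	return true;	/* pretend the usual watermark check says "retry" */
}

int main(void)
{
	int no_progress_loops = 0;

	while (should_reclaim_retry(&no_progress_loops))
		;
	printf("OOM declared after %d loops, %d pageblocks still reserved\n",
	       no_progress_loops, reserved_pageblocks);
	return 0;
}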
-- 
Michal Hocko
SUSE Labs
