From: Michal Hocko <mhocko@kernel.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Sangseok Lee <sangseok.lee@lge.com>
Subject: Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
Date: Mon, 10 Oct 2016 09:41:40 +0200
Message-ID: <20161010074139.GB20420@dhcp22.suse.cz>
In-Reply-To: <20161007144345.GC3060@bbox>

On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> > On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > > After fixing the race in the highatomic page count, I still encounter
> > > OOM with a lot of free memory reserved as highatomic.
> > >
> > > One reason in my testing was that we unreserve free pages only if
> > > reclaim makes progress. Otherwise, we never get a chance to unreserve.
> > >
> > > Another problem after fixing that was that it doesn't guarantee
> > > unreserving every page of the highatomic pageblocks, because it just
> > > releases *a* pageblock, which could hold only a few free pages, so
> > > another context can steal it easily, and the process stuck in direct
> > > reclaim can finally hit OOM even though there are free pages which
> > > could have been unreserved.
> > >
> > > This patch changes the logic so that it unreserves pageblocks
> > > proportionally to no_progress_loops. IOW, on the first reclaim retry
> > > it will try to unreserve one pageblock, on the second retry
> > > 1/MAX_RECLAIM_RETRIES of the reserved pageblocks, and finally all
> > > reserved pageblocks before the OOM.
> > >
> > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > ---
> > >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> > >  1 file changed, 44 insertions(+), 13 deletions(-)
> >
> > This sounds much more complex than it needs to be IMHO. Why wouldn't
> > something as simple as the following work? Please note that I didn't
> > even try to compile this. It is just to give you an idea.
> > ---
> >  mm/page_alloc.c | 26 ++++++++++++++++++++------
> >  1 file changed, 20 insertions(+), 6 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 73f60ad6315f..e575a4f38555 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> >   * intense memory pressure but failed atomic allocations should be easier
> >   * to recover from than an OOM.
> >   */
> > -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> > +						bool force)
> >  {
> >  	struct zonelist *zonelist = ac->zonelist;
> >  	unsigned long flags;
> > @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  
> >  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> >  								ac->nodemask) {
> > -		/* Preserve at least one pageblock */
> > -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > +		if (!zone->nr_reserved_highatomic)
> > +			continue;
> > +
> > +		/*
> > +		 * Preserve at least one pageblock unless we are really running
> > +		 * out of memory
> > +		 */
> > +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
> >  			continue;
> >  
> >  		spin_lock_irqsave(&zone->lock, flags);
> > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  			set_pageblock_migratetype(page, ac->migratetype);
> >  			move_freepages_block(zone, page, ac->migratetype);
> >  			spin_unlock_irqrestore(&zone->lock, flags);
> > -			return;
> > +			return true;
>
> Such a cut-off leaves reserved pageblocks behind before the OOM.
> We call that a premature OOM kill.

Not sure I understand. The above should get rid of all atomic reserves
before we go OOM. We can do it all at once but that sounds too
aggressive to me. If we just do one at a time we have a chance to keep
some reserves if the OOM situation is really ephemeral.

Does this patch work in your usecase?

> If you feel it's rather complex, we can simply drain *every* reserved
> page when no_progress_loops is greater than MAX_RECLAIM_RETRIES.
> Do you want that?
>
> It's a rather conservative approach to keeping highatomic pageblocks.
> Anyway, I think it's a matter of policy, and my goal is just to use up
> every reserved pageblock before OOM so no one can ever say
> "Hey, there are enough free pages, more than the min watermark, so
> why am I seeing an OOM on a GFP_KERNEL 4K allocation?".

I agree that triggering OOM while we are above min watermarks is less
than optimal and we should think about how to fix it.
-- 
Michal Hocko
SUSE Labs