From: Vlastimil Babka <vbabka@suse.cz>
To: Mike Kravetz <mike.kravetz@oracle.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hillf Danton <hdanton@sina.com>, Michal Hocko <mhocko@kernel.org>,
Mel Gorman <mgorman@suse.de>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrea Arcangeli <aarcange@redhat.com>,
David Rientjes <rientjes@google.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 1/3] mm, reclaim: make should_continue_reclaim perform dryrun detection
Date: Mon, 5 Aug 2019 10:42:59 +0200 [thread overview]
Message-ID: <bb16d3f0-0984-be32-4346-358abad92c4c@suse.cz> (raw)
In-Reply-To: <20190802223930.30971-2-mike.kravetz@oracle.com>
On 8/3/19 12:39 AM, Mike Kravetz wrote:
> From: Hillf Danton <hdanton@sina.com>
>
> Address the issue of should_continue_reclaim returning true too often
> for __GFP_RETRY_MAYFAIL attempts when !nr_reclaimed and nr_scanned.
> This could happen during hugetlb page allocation causing stalls for
> minutes or hours.
>
> We can stop reclaiming pages if compaction reports it can make a progress.
> A code reshuffle is needed to do that.
> And it has side-effects, however,
> with allocation latencies in other cases but that would come at the cost
> of potential premature reclaim which has consequences of itself.
Based on Mel's longer explanation, can we clarify the wording here? e.g.:
There might be side-effects for other high-order allocations that would
potentially benefit from more reclaim before compaction for them to be
faster and less likely to stall, but the consequences of
premature/over-reclaim are considered worse.
> We can also bail out of reclaiming pages if we know that there are not
> enough inactive lru pages left to satisfy the costly allocation.
>
> We can also give up reclaiming pages if we see a dryrun occur, even when
> there are plenty of inactive pages. IOW, with a dryrun detected, we can be
> sure we have reclaimed as many pages as we could.
>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Hillf Danton <hdanton@sina.com>
> Tested-by: Mike Kravetz <mike.kravetz@oracle.com>
> Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
I will send a followup cleanup.
Shouldn't there also be Mike's SOB, since he is the one sending the patch?
> ---
> mm/vmscan.c | 28 +++++++++++++++-------------
> 1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 47aa2158cfac..a386c5351592 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2738,18 +2738,6 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
> return false;
> }
>
> - /*
> - * If we have not reclaimed enough pages for compaction and the
> - * inactive lists are large enough, continue reclaiming
> - */
> - pages_for_compaction = compact_gap(sc->order);
> - inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
> - if (get_nr_swap_pages() > 0)
> - inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
> - if (sc->nr_reclaimed < pages_for_compaction &&
> - inactive_lru_pages > pages_for_compaction)
> - return true;
> -
> /* If compaction would go ahead or the allocation would succeed, stop */
> for (z = 0; z <= sc->reclaim_idx; z++) {
> struct zone *zone = &pgdat->node_zones[z];
> @@ -2765,7 +2753,21 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
> ;
> }
> }
> - return true;
> +
> + /*
> + * If we have not reclaimed enough pages for compaction and the
> + * inactive lists are large enough, continue reclaiming
> + */
> + pages_for_compaction = compact_gap(sc->order);
> + inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
> + if (get_nr_swap_pages() > 0)
> + inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
> +
> + return inactive_lru_pages > pages_for_compaction &&
> + /*
> + * avoid dryrun with plenty of inactive pages
> + */
> + nr_scanned && nr_reclaimed;
> }
>
> static bool pgdat_memcg_congested(pg_data_t *pgdat, struct mem_cgroup *memcg)
>
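For reference (not part of the patch), here is a minimal userspace sketch of
the decision that the patch moves to the end of should_continue_reclaim():
reclaim continues only while the inactive LRU is still larger than
compact_gap(order) and the previous pass both scanned and reclaimed
something, i.e. it was not a dryrun. The compact_gap() stand-in and all
values below are illustrative, not copied from mm/vmscan.c.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the kernel's compact_gap(): twice the allocation size */
static unsigned long compact_gap(unsigned int order)
{
	return 2UL << order;
}

/* mirrors the reordered tail of should_continue_reclaim() */
static bool keep_reclaiming(unsigned long inactive_lru_pages,
			    unsigned int order,
			    unsigned long nr_scanned,
			    unsigned long nr_reclaimed)
{
	unsigned long pages_for_compaction = compact_gap(order);

	return inactive_lru_pages > pages_for_compaction &&
	       /* avoid dryrun with plenty of inactive pages */
	       nr_scanned && nr_reclaimed;
}

int main(void)
{
	/* plenty of inactive pages but a dryrun pass: stop reclaiming */
	printf("%d\n", keep_reclaiming(1UL << 20, 9, 32, 0));	/* 0 */
	/* plenty of inactive pages and real progress: keep going */
	printf("%d\n", keep_reclaiming(1UL << 20, 9, 32, 8));	/* 1 */
	/* inactive LRU smaller than compact_gap(): stop reclaiming */
	printf("%d\n", keep_reclaiming(64, 9, 32, 8));		/* 0 */
	return 0;
}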