From: Joonsoo Kim <js1304@gmail.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Linux Memory Management List <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
kernel-team@lge.com, Vlastimil Babka <vbabka@suse.cz>,
Christoph Hellwig <hch@infradead.org>,
Roman Gushchin <guro@fb.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: Re: [PATCH v3 4/8] mm/hugetlb: make hugetlb migration callback CMA aware
Date: Tue, 30 Jun 2020 15:30:04 +0900 [thread overview]
Message-ID: <CAAmzW4PFEEs0FGe+XMHzRdXr0LpdF3TZZG2L3E+opRyZWDZ48A@mail.gmail.com> (raw)
In-Reply-To: <20200629075510.GA32461@dhcp22.suse.cz>
On Mon, Jun 29, 2020 at 4:55 PM, Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 29-06-20 15:27:25, Joonsoo Kim wrote:
> [...]
> > A solution that introduces a new
> > argument doesn't cause this problem while avoiding CMA regions.
>
> My primary argument is that there is no real reason to treat hugetlb
> dequeueing somehow differently. So if we simply exclude __GFP_MOVABLE for
> _any_ other allocation then this certainly has some drawbacks on the
> usable memory for the migration target and it can lead to allocation
> failures (especially on movable_node setups where the amount of movable
> memory might be really high) and therefore longterm gup failures. And
> yes those failures might be premature. But my point is that the behavior
> would be _consistent_. So a user wouldn't see random failures for some
> types of pages while a success for others.
Hmm... I don't agree with your argument. Excluding __GFP_MOVABLE is
a *work-around* for excluding CMA regions, while the dequeue implementation
in this patch excludes them directly. Why should we use a work-around
in this case? Being consistent is important, but consistency is only
meaningful if the behavior is correct.
Consistency should not get in the way of better code. Besides, dequeueing
is already a special path that exists only for hugetlb, so using a
different (correct) implementation there doesn't break any consistency.
> Let's have a look at this patch. It is simply working around that
> restriction for a very limited type of pages - only hugetlb pages
> which have reserves in non-cma movable pools. I would claim that many
> setups will simply not have many (if any) spare hugetlb pages in the
> pool except for temporary time periods when a workload is (re)starting
> because this would be effectively a wasted memory.
This cannot be a blocker for making the code correct.
> The patch is adding a special case flag to claim what the code already
> does by memalloc_nocma_{save,restore} API so the information is already
> there. Sorry I didn't bring this up earlier but I had completely forgotten
> about its existence. With that one in place I do agree that dequeueing
> needs a fixup but that should be something like the following instead.
Thanks for letting me know. I didn't know about this API until now. It looks
better to use it than to introduce a new argument.
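As a side note for anyone else who hadn't seen this API: the scoped-flag
pattern can be modeled in plain user-space C. The names below mirror the
kernel helpers (which live in include/linux/sched/mm.h in this era), but the
flag values and the global are made-up stand-ins for current->flags, not the
kernel's real layout:

```c
#include <assert.h>

/* Standalone model of the scoped nocma pattern; the real helpers live
 * in include/linux/sched/mm.h. Flag values here are arbitrary stand-ins. */
#define __GFP_MOVABLE     0x08u
#define PF_MEMALLOC_NOCMA 0x01u

static unsigned int current_flags; /* stands in for current->flags */

static unsigned int memalloc_nocma_save(void)
{
	unsigned int old = current_flags & PF_MEMALLOC_NOCMA;

	current_flags |= PF_MEMALLOC_NOCMA;
	return old;
}

static void memalloc_nocma_restore(unsigned int old)
{
	current_flags = (current_flags & ~PF_MEMALLOC_NOCMA) | old;
}

/* Mirrors the idea of current_gfp_context(): a scoped task flag strips
 * __GFP_MOVABLE so CMA pageblocks are never used inside the scope. */
static unsigned int current_gfp_context(unsigned int gfp)
{
	if (current_flags & PF_MEMALLOC_NOCMA)
		gfp &= ~__GFP_MOVABLE;
	return gfp;
}
```

The point of the pattern is that every allocation made between save and
restore is filtered the same way, without threading a new argument through
each callee.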
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 57ece74e3aae..c1595b1d36f3 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1092,10 +1092,14 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
> /* Movability of hugepages depends on migration support. */
> static inline gfp_t htlb_alloc_mask(struct hstate *h)
> {
> + gfp_t gfp;
> +
> if (hugepage_movable_supported(h))
> - return GFP_HIGHUSER_MOVABLE;
> + gfp = GFP_HIGHUSER_MOVABLE;
> else
> - return GFP_HIGHUSER;
> + gfp = GFP_HIGHUSER;
> +
> + return current_gfp_context(gfp);
> }
>
> static struct page *dequeue_huge_page_vma(struct hstate *h,
>
> If we even fix this general issue for other allocations and allow a
> better CMA exclusion then it would be implemented consistently for
> everybody.
Yes, I have reviewed the memalloc_nocma_{save,restore} APIs and agree they
are the better way to implement CMA exclusion. I will do that after this
patchset is finished.
> Does this make more sense to you are we still not on the same page wrt
> to the actual problem?
Yes, but we still have different opinions about it. As said above, I will
make a patch for better CMA exclusion after this patchset; it will make the
code consistent.
I'd really appreciate it if you could wait until then.
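To make the behavior being debated concrete, here is a toy user-space model
of a dequeue loop that leaves CMA-backed pages in the pool when the caller's
context forbids them. struct page, the in_cma field, and dequeue_nocma() are
invented for illustration; the real check in the patch under discussion is
is_migrate_cma_page() inside dequeue_huge_page_nodemask():

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of CMA-aware dequeueing. A bool stands in for the kernel's
 * is_migrate_cma_page() test. */
struct page {
	bool in_cma;
};

static struct page *dequeue_nocma(struct page *freelist, size_t n,
				  bool skip_cma)
{
	for (size_t i = 0; i < n; i++) {
		if (skip_cma && freelist[i].in_cma)
			continue; /* leave CMA-backed pages in the pool */
		return &freelist[i];
	}
	return NULL; /* caller falls back to a fresh allocation */
}
```

With Michal's htlb_alloc_mask() fixup, the "skip CMA" decision comes from the
task's scoped flags via the gfp mask rather than from a new function argument.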
Thanks.