From: Michal Hocko <mhocko@suse.com>
To: Muchun Song <songmuchun@bytedance.com>
Cc: "Jonathan Corbet" <corbet@lwn.net>,
	"Mike Kravetz" <mike.kravetz@oracle.com>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org,
	"Peter Zijlstra" <peterz@infradead.org>,
	viro@zeniv.linux.org.uk,
	"Andrew Morton" <akpm@linux-foundation.org>,
	paulmck@kernel.org, mchehab+huawei@kernel.org,
	pawan.kumar.gupta@linux.intel.com,
	"Randy Dunlap" <rdunlap@infradead.org>,
	oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
	"Mina Almasry" <almasrymina@google.com>,
	"David Rientjes" <rientjes@google.com>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Oscar Salvador" <osalvador@suse.de>,
	"Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>,
	"David Hildenbrand" <david@redhat.com>,
	"HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	"Xiongchun duan" <duanxiongchun@bytedance.com>,
	linux-doc@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	"Linux Memory Management List" <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [External] Re: [PATCH v15 4/8] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
Date: Mon, 15 Feb 2021 14:19:28 +0100
Message-ID: <YCp04NVBZpZZ5k7G@dhcp22.suse.cz>
In-Reply-To: <CAMZfGtWd_ZaXtiEdMKhpnAHDw5CTm-CSPSXW+GfKhyX5qQK=Og@mail.gmail.com>

On Mon 15-02-21 20:44:57, Muchun Song wrote:
> On Mon, Feb 15, 2021 at 8:18 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 15-02-21 20:00:07, Muchun Song wrote:
> > > On Mon, Feb 15, 2021 at 7:51 PM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > On Mon, Feb 15, 2021 at 6:33 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Mon 15-02-21 18:05:06, Muchun Song wrote:
> > > > > > On Fri, Feb 12, 2021 at 11:32 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > > [...]
> > > > > > > > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > > > > > > > +{
> > > > > > > > +     int ret;
> > > > > > > > +     unsigned long vmemmap_addr = (unsigned long)head;
> > > > > > > > +     unsigned long vmemmap_end, vmemmap_reuse;
> > > > > > > > +
> > > > > > > > +     if (!free_vmemmap_pages_per_hpage(h))
> > > > > > > > +             return 0;
> > > > > > > > +
> > > > > > > > +     vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > > > > > > > +     vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > > > > > > > +     vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > > > > > > > +
> > > > > > > > +     /*
> > > > > > > > +      * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > > > > > > > +      * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > > > > > > > +      * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > > > > > > > +      * When a HugeTLB page is freed to the buddy allocator, previously
> > > > > > > > +      * discarded vmemmap pages must be allocated and remapped.
> > > > > > > > +      */
> > > > > > > > +     ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > > > > > > > +                               GFP_ATOMIC | __GFP_NOWARN | __GFP_THISNODE);
> > > > > > >
> > > > > > > I do not think that this is a good allocation mode. GFP_ATOMIC is a non
> > > > > > > sleeping allocation and a medium memory pressure might cause it to
> > > > > > > fail prematurely. I do not think this is really an atomic context which
> > > > > > > couldn't afford memory reclaim. I also do not think we want to grant
> > > > > >
> > > > > > Because alloc_huge_page_vmemmap is called under hugetlb_lock
> > > > > > now, using GFP_ATOMIC indeed makes the code simpler.
> > > > >
> > > > > You can have a preallocated list of pages prior to taking the lock.
> > > >
> > > > A discussion about this can refer to here:
> > > >
> > > > https://patchwork.kernel.org/project/linux-mm/patch/20210117151053.24600-5-songmuchun@bytedance.com/
> > > >
> > > > > Moreover, do we want to manipulate vmemmaps from under a spinlock in
> > > > > general? I have to say I missed that detail when reviewing. Need to
> > > > > think more.
> > > > >
> > > > > > From the kernel documentation, I learned that __GFP_NOMEMALLOC
> > > > > > can be used to explicitly forbid access to emergency reserves. So if
> > > > > > we do not want to use the reserve memory, how about replacing it with
> > > > > >
> > > > > > GFP_ATOMIC | __GFP_NOMEMALLOC | __GFP_NOWARN | __GFP_THISNODE
> > > > >
> > > > > The whole point of GFP_ATOMIC is to grant access to memory reserves so
> > > > > the above is quite dubious. If you do not want access to memory reserves
> > > >
> > > > Look at the code of gfp_to_alloc_flags().
> > > >
> > > > static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
> > > > {
> > > >         [...]
> > > >         if (gfp_mask & __GFP_ATOMIC) {
> > > >                 /*
> > > >                  * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
> > > >                  * if it can't schedule.
> > > >                  */
> > > >                 if (!(gfp_mask & __GFP_NOMEMALLOC))
> > > >                         alloc_flags |= ALLOC_HARDER;
> > > >                 [...]
> > > >         }
> > > >         [...]
> > > > }
> > > >
> > > > Seems to allow this operation (GFP_ATOMIC | __GFP_NOMEMALLOC).
> >
> > Please read my response again more carefully. I am not claiming that
> > combination is not allowed. I have said it doesn't make any sense in
> > this context.
> 
> I see you are worried that using GFP_ATOMIC will consume the memory
> reserves without limit, so I think that __GFP_NOMEMALLOC may be suitable
> for us. Sorry, I may not have understood your point. What did I miss?

OK, let me try to explain again. GFP_ATOMIC is not only a non-sleeping
allocation request. It also grants access to memory reserves. The latter
is a bit more involved because there are more layers of memory reserves
to access, but that is not really important. Non-sleeping semantics can
be achieved with GFP_NOWAIT, which will not grant access to reserves
unless explicitly stated - e.g. by __GFP_HIGH or __GFP_ATOMIC.
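
For reference, the two masks are composed like this in
include/linux/gfp.h (in the tree this series is based on):

#define GFP_ATOMIC	(__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)

Both can wake up kswapd and neither can sleep; only GFP_ATOMIC carries
the flags which grant access to memory reserves.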
Is that more clear?

Now again, why I do not think access to memory reserves is suitable.
Hugetlb pages can be released in large batches and that might cause a
peak depletion of memory reserves which are normally used by other
consumers as well. Other GFP_ATOMIC users might see allocation failures.
Those shouldn't really be fatal as nobody should be relying on those
allocations and a failure usually means a hand over to a different, less
constrained, context. So this concern is more about well behaved hugetlb
code than about correctness.
Is that more clear?

There shouldn't be any real reason why the memory allocation for
vmemmaps, or handling vmemmap in general, has to be done from within the
hugetlb lock and therefore require non-sleeping semantics. All of that
can be deferred to a more relaxed context. If you want to make a
GFP_NOWAIT optimistic attempt in the direct free path then no problem,
but you have to expect failures under memory pressure. If you want a
more robust allocation request then you have to go outside of the spin
lock and use GFP_KERNEL | __GFP_NORETRY or GFP_KERNEL |
__GFP_RETRY_MAYFAIL, depending on how hard you want to try.
__GFP_THISNODE makes a slight difference here but it is something I
would recommend not depending on.
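
To illustrate the direction, here is a rough and completely untested
sketch of preallocating the vmemmap pages before hugetlb_lock is taken.
The helper name and the exact call site are made up; the point is only
that a sleeping gfp mask can be used outside the spinlock:

/*
 * Illustrative sketch only (not part of the series): preallocate the
 * vmemmap pages with a sleeping allocation before hugetlb_lock is taken.
 */
static int hugetlb_vmemmap_prealloc(struct hstate *h,
				    struct list_head *pages)
{
	int i, nr = free_vmemmap_pages_per_hpage(h);

	for (i = 0; i < nr; i++) {
		/* sleeping allocation, may still fail under pressure */
		struct page *page = alloc_page(GFP_KERNEL | __GFP_NORETRY);

		if (!page)
			goto out_free;
		list_add(&page->lru, pages);
	}
	return 0;

out_free:
	while (!list_empty(pages)) {
		struct page *page = list_first_entry(pages, struct page, lru);

		list_del(&page->lru);
		__free_page(page);
	}
	return -ENOMEM;
}

The caller would take hugetlb_lock only after this succeeds and hand the
list over to the vmemmap remapping code, keeping the spinlocked section
free of any allocation.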
Is that more clear?
-- 
Michal Hocko
SUSE Labs
