From: David Rientjes <rientjes@google.com>
To: Muchun Song <songmuchun@bytedance.com>
Cc: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
dave.hansen@linux.intel.com, luto@kernel.org,
Peter Zijlstra <peterz@infradead.org>,
viro@zeniv.linux.org.uk,
Andrew Morton <akpm@linux-foundation.org>,
paulmck@kernel.org, mchehab+huawei@kernel.org,
pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org,
oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
almasrymina@google.com, Matthew Wilcox <willy@infradead.org>,
osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com,
david@redhat.com, naoya.horiguchi@nec.com,
duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v13 04/12] mm: hugetlb: defer freeing of HugeTLB pages
Date: Sun, 24 Jan 2021 15:55:37 -0800 (PST) [thread overview]
Message-ID: <59d18082-248a-7014-b917-625d759c572@google.com> (raw)
In-Reply-To: <20210117151053.24600-5-songmuchun@bytedance.com>
On Sun, 17 Jan 2021, Muchun Song wrote:
> In the subsequent patch, we should allocate the vmemmap pages when
> freeing HugeTLB pages. But update_and_free_page() is always called
> with holding hugetlb_lock, so we cannot use GFP_KERNEL to allocate
> vmemmap pages. However, we can defer the actual freeing in a kworker
> to prevent from using GFP_ATOMIC to allocate the vmemmap pages.
>
> The update_hpage_vmemmap_workfn() is where the call to allocate
> vmemmmap pages will be inserted.
>
I think it's reasonable to assume that userspace can release free hugetlb
pages from the pool on oom conditions when reclaim has become too
expensive. This approach now requires that we can allocate vmemmap pages
in a potential oom condition as a prerequisite for freeing memory, which
seems less than ideal.
And, by doing this through a kworker, we can presumably get queued behind
another work item that requires memory to make forward progress in this
oom condition.
Two thoughts:
- We're going to be freeing the hugetlb page after we allocate the
vmemmap pages, so why do we need to allocate with GFP_KERNEL? Can't we
simply dip into memory reserves using GFP_ATOMIC (and thus keep
holding hugetlb_lock), since we know we'll be freeing more memory than
we're allocating? Requiring a GFP_KERNEL allocation to block in order
to free memory for vmemmap, when we'll be freeing memory ourselves,
seems dubious. This would also simplify all of this.
- If the answer is that we actually have to use GFP_KERNEL for other
reasons, what are your thoughts on pre-allocating the vmemmap as opposed
to deferring to a kworker? In other words, preallocate the necessary
memory with GFP_KERNEL and put it on a linked list in struct hstate
before acquiring hugetlb_lock.
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
> mm/hugetlb.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> mm/hugetlb_vmemmap.c | 12 ---------
> mm/hugetlb_vmemmap.h | 17 ++++++++++++
> 3 files changed, 89 insertions(+), 14 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 140135fc8113..c165186ec2cf 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1292,15 +1292,85 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> unsigned int order) { }
> #endif
>
> -static void update_and_free_page(struct hstate *h, struct page *page)
> +static void __free_hugepage(struct hstate *h, struct page *page);
> +
> +/*
> + * As update_and_free_page() is always called with holding hugetlb_lock, so we
> + * cannot use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
> + * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
> + * the vmemmap pages.
> + *
> + * The update_hpage_vmemmap_workfn() is where the call to allocate vmemmmap
> + * pages will be inserted.
> + *
> + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of pages
> + * to be freed and frees them one-by-one. As the page->mapping pointer is going
> + * to be cleared in update_hpage_vmemmap_workfn() anyway, it is reused as the
> + * llist_node structure of a lockless linked list of huge pages to be freed.
> + */
> +static LLIST_HEAD(hpage_update_freelist);
> +
> +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> {
> - int i;
> + struct llist_node *node;
> +
> + node = llist_del_all(&hpage_update_freelist);
> +
> + while (node) {
> + struct page *page;
> + struct hstate *h;
> +
> + page = container_of((struct address_space **)node,
> + struct page, mapping);
> + node = node->next;
> + page->mapping = NULL;
> + h = page_hstate(page);
> +
> + spin_lock(&hugetlb_lock);
> + __free_hugepage(h, page);
> + spin_unlock(&hugetlb_lock);
>
> + cond_resched();
Wouldn't it be better to hold hugetlb_lock for the iteration rather than
constantly dropping it and reacquiring it? Use
cond_resched_lock(&hugetlb_lock) instead?