From: Mike Kravetz <mike.kravetz@oracle.com>
To: js1304@gmail.com, Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@lge.com, Vlastimil Babka <vbabka@suse.cz>,
	Christoph Hellwig <hch@infradead.org>,
	Roman Gushchin <guro@fb.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Michal Hocko <mhocko@suse.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: Re: [PATCH 04/11] mm/hugetlb: unify hugetlb migration callback function
Date: Thu, 21 May 2020 13:54:21 -0700	[thread overview]
Message-ID: <01cdaaea-892a-2b29-edbd-279611f418b0@oracle.com> (raw)
In-Reply-To: <1589764857-6800-5-git-send-email-iamjoonsoo.kim@lge.com>

On 5/17/20 6:20 PM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> There is no difference between the two migration callback functions,
> alloc_huge_page_node() and alloc_huge_page_nodemask(), except for
> __GFP_THISNODE handling. This patch adds one more field to
> struct alloc_control and handles this exception there.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  include/linux/hugetlb.h |  8 --------
>  mm/hugetlb.c            | 23 ++---------------------
>  mm/internal.h           |  1 +
>  mm/mempolicy.c          |  3 ++-
>  4 files changed, 5 insertions(+), 30 deletions(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 6da217e..4892ed3 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -505,8 +505,6 @@ struct huge_bootmem_page {
>  
>  struct page *alloc_migrate_huge_page(struct hstate *h,
>  				struct alloc_control *ac);
> -struct page *alloc_huge_page_node(struct hstate *h,
> -				struct alloc_control *ac);
>  struct page *alloc_huge_page_nodemask(struct hstate *h,
>  				struct alloc_control *ac);
>  struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
> @@ -755,12 +753,6 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
>  struct hstate {};
>  
>  static inline struct page *
> -alloc_huge_page_node(struct hstate *h, struct alloc_control *ac)
> -{
> -	return NULL;
> -}
> -
> -static inline struct page *
>  alloc_huge_page_nodemask(struct hstate *h, struct alloc_control *ac)
>  {
>  	return NULL;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 859dba4..60b0983 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1981,31 +1981,12 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
>  }
>  
>  /* page migration callback function */
> -struct page *alloc_huge_page_node(struct hstate *h,
> -				struct alloc_control *ac)
> -{
> -	struct page *page = NULL;
> -
> -	ac->gfp_mask |= htlb_alloc_mask(h);
> -	if (ac->nid != NUMA_NO_NODE)
> -		ac->gfp_mask |= __GFP_THISNODE;
> -
> -	spin_lock(&hugetlb_lock);
> -	if (h->free_huge_pages - h->resv_huge_pages > 0)
> -		page = dequeue_huge_page_nodemask(h, ac);
> -	spin_unlock(&hugetlb_lock);
> -
> -	if (!page)
> -		page = alloc_migrate_huge_page(h, ac);
> -
> -	return page;
> -}
> -
> -/* page migration callback function */
>  struct page *alloc_huge_page_nodemask(struct hstate *h,
>  				struct alloc_control *ac)
>  {
>  	ac->gfp_mask |= htlb_alloc_mask(h);
> +	if (ac->thisnode && ac->nid != NUMA_NO_NODE)
> +		ac->gfp_mask |= __GFP_THISNODE;
>  
>  	spin_lock(&hugetlb_lock);
>  	if (h->free_huge_pages - h->resv_huge_pages > 0) {
> diff --git a/mm/internal.h b/mm/internal.h
> index 75b3f8e..574722d0 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -618,6 +618,7 @@ struct alloc_control {
>  	int nid;
>  	nodemask_t *nmask;
>  	gfp_t gfp_mask;
> +	bool thisnode;

I wonder if the new field is necessary?

IIUC, the new field simply keeps the NUMA_NO_NODE check, and the possible
setting of __GFP_THISNODE, from applying to the pre-existing
alloc_huge_page_nodemask() calling sequences.  However, it appears that node
(preferred_nid) is always set to something other than NUMA_NO_NODE in those
callers anyway.
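
To make that concrete, the only functional difference the field makes is in
this gate (rough sketch, not compile-tested):

	/* as in this patch: only a caller that sets ac->thisnode (the old
	 * alloc_huge_page_node() path) can get __GFP_THISNODE */
	if (ac->thisnode && ac->nid != NUMA_NO_NODE)
		ac->gfp_mask |= __GFP_THISNODE;

	/* without the field the gate collapses to the nid check, and then,
	 * if I am reading __GFP_THISNODE right, every caller that passes a
	 * real node id gets pinned to that node and can no longer fall back
	 * within the nodemask it supplied */
	if (ac->nid != NUMA_NO_NODE)
		ac->gfp_mask |= __GFP_THISNODE;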

It obviously makes sense to add the field to guarantee no change in
functionality while making the conversions.  However, if it is not really
necessary, it may cause confusion later.
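
A hypothetical way to avoid the field (just a sketch, using the names from
this series) would be to let the one caller that wants node-bound behavior
ask for it via gfp_mask, since alloc_huge_page_nodemask() only ORs
htlb_alloc_mask(h) into whatever the caller passed:

	/* in alloc_new_node_page(), instead of .thisnode = true */
	struct alloc_control ac = {
		.nid = node,
		.nmask = NULL,
		.gfp_mask = __GFP_THISNODE,
	};

	return alloc_huge_page_nodemask(h, &ac);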

-- 
Mike Kravetz

>  };
>  
>  #endif	/* __MM_INTERNAL_H */
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 06f60a5..629feaa 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1073,9 +1073,10 @@ struct page *alloc_new_node_page(struct page *page, unsigned long node)
>  		struct alloc_control ac = {
>  			.nid = node,
>  			.nmask = NULL,
> +			.thisnode = true,
>  		};
>  
> -		return alloc_huge_page_node(h, &ac);
> +		return alloc_huge_page_nodemask(h, &ac);
>  	} else if (PageTransHuge(page)) {
>  		struct page *thp;
>  
> 

Thread overview: 27+ messages
2020-05-18  1:20 [PATCH 00/11] clean-up the migration target allocation functions js1304
2020-05-18  1:20 ` [PATCH 01/11] mm/page_isolation: prefer the node of the source page js1304
2020-05-21  0:37   ` Roman Gushchin
2020-05-21  1:18     ` Joonsoo Kim
2020-05-27  6:45       ` Joonsoo Kim
2020-05-18  1:20 ` [PATCH 02/11] mm/migrate: move migration helper from .h to .c js1304
2020-05-21 18:17   ` Mike Kravetz
2020-05-18  1:20 ` [PATCH 03/11] mm/hugetlb: introduce alloc_control structure to simplify migration target allocation APIs js1304
2020-05-21  0:43   ` Roman Gushchin
2020-05-21  1:20     ` Joonsoo Kim
2020-05-21 18:57   ` Mike Kravetz
2020-05-22  5:52     ` Joonsoo Kim
2020-05-18  1:20 ` [PATCH 04/11] mm/hugetlb: unify hugetlb migration callback function js1304
2020-05-21  0:46   ` Roman Gushchin
2020-05-21  1:22     ` Joonsoo Kim
2020-05-21 20:54   ` Mike Kravetz [this message]
2020-05-22  7:38     ` Joonsoo Kim
2020-05-18  1:20 ` [PATCH 05/11] mm/hugetlb: make hugetlb migration target allocation APIs CMA aware js1304
2020-05-21 21:43   ` Mike Kravetz
2020-05-18  1:20 ` [PATCH 06/11] mm/hugetlb: do not modify user provided gfp_mask js1304
2020-05-21 22:19   ` Mike Kravetz
2020-05-22  7:42     ` Joonsoo Kim
2020-05-18  1:20 ` [PATCH 07/11] mm/migrate: change the interface of the migration target alloc/free functions js1304
2020-05-18  1:20 ` [PATCH 08/11] mm/migrate: make standard migration target allocation functions js1304
2020-05-18  1:20 ` [PATCH 09/11] mm/gup: use standard migration target allocation function js1304
2020-05-18  1:20 ` [PATCH 10/11] mm/mempolicy: " js1304
2020-05-18  1:20 ` [PATCH 11/11] mm/page_alloc: use standard migration target allocation function directly js1304
