Linux-mm Archive on lore.kernel.org
From: "Li Xinhai" <lixinhai.lxh@gmail.com>
To: mhocko <mhocko@suse.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	 akpm <akpm@linux-foundation.org>,
	 "Mike Kravetz" <mike.kravetz@oracle.com>
Subject: Re: [PATCH v4] mm/mempolicy,hugetlb: Checking hstate for hugetlbfs page in vma_migratable
Date: Thu, 16 Jan 2020 21:50:34 +0800
Message-ID: <20200116215032206994102@gmail.com> (raw)
In-Reply-To: <20200116095614.GO19428@dhcp22.suse.cz>

On 2020-01-16 at 17:56 Michal Hocko wrote:
>On Thu 16-01-20 04:11:25, Li Xinhai wrote:
>> Check the hstate at an early phase, when isolating the page, instead
>> of during the unmap and move phase, to avoid useless isolation.
>
>Could you be more specific what you mean by isolation and why does it
>matter? The patch description should really explain _why_ the change is
>needed or desirable. 

The changelog can be improved:

vma_migratable() is called to check whether the pages in a vma can be
migrated, before going ahead to isolate, unmap and move them. For hugetlb
pages, hugepage_migration_supported(struct hstate *h) is one factor that
decides whether migration is supported. In the current code, this
function is called from unmap_and_move_huge_page(), after page isolation
has already completed.

This patch checks the hstate from vma_migratable() and avoids isolating
pages whose migration is not supported.

>
>> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>> ---
>>  include/linux/hugetlb.h   | 10 ++++++++++
>>  include/linux/mempolicy.h | 29 +----------------------------
>>  mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
>>  3 files changed, 39 insertions(+), 28 deletions(-)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index 31d4920..c9d871d 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -598,6 +598,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>>  	return arch_hugetlb_migration_supported(h);
>>  }
>> 
>> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
>> +{
>> +	return hugepage_migration_supported(hstate_vma(vma));
>> +}
>> +
>>  /*
>>   * Movability check is different as compared to migration check.
>>   * It determines whether or not a huge page should be placed on
>> @@ -809,6 +814,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>>  	return false;
>>  }
>> 
>> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
>> +{
>> +	return false;
>> +}
>> +
>>  static inline bool hugepage_movable_supported(struct hstate *h)
>>  {
>>  	return false;
>> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
>> index 5228c62..8165278 100644
>> --- a/include/linux/mempolicy.h
>> +++ b/include/linux/mempolicy.h
>> @@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
>>  extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
>> 
>>  /* Check if a vma is migratable */
>> -static inline bool vma_migratable(struct vm_area_struct *vma)
>> -{
>> -	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
>> -		return false;
>> -
>> -	/*
>> -	 * DAX device mappings require predictable access latency, so avoid
>> -	 * incurring periodic faults.
>> -	 */
>> -	if (vma_is_dax(vma))
>> -		return false;
>> -
>> -#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
>> -	if (vma->vm_flags & VM_HUGETLB)
>> -		return false;
>> -#endif
>> -
>> -	/*
>> -	 * Migration allocates pages in the highest zone. If we cannot
>> -	 * do so then migration (at least from node to node) is not
>> -	 * possible.
>> -	 */
>> -	if (vma->vm_file &&
>> -	    gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
>> -			< policy_zone)
>> -		return false;
>> -	return true;
>> -}
>> +extern bool vma_migratable(struct vm_area_struct *vma);
>> 
>>  extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
>>  extern void mpol_put_task_policy(struct task_struct *);
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index 067cf7d..8a01fb1 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
>> 
>>  #endif /* CONFIG_COMPAT */
>> 
>> +bool vma_migratable(struct vm_area_struct *vma)
>> +{
>> +	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
>> +		return false;
>> +
>> +	/*
>> +	 * DAX device mappings require predictable access latency, so avoid
>> +	 * incurring periodic faults.
>> +	 */
>> +	if (vma_is_dax(vma))
>> +		return false;
>> +
>> +	if (is_vm_hugetlb_page(vma) &&
>> +	    !vm_hugepage_migration_supported(vma))
>> +		return false;
>> +
>> +	/*
>> +	 * Migration allocates pages in the highest zone. If we cannot
>> +	 * do so then migration (at least from node to node) is not
>> +	 * possible.
>> +	 */
>> +	if (vma->vm_file &&
>> +	    gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
>> +			< policy_zone)
>> +		return false;
>> +	return true;
>> +}
>> +
>>  struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
>>  		unsigned long addr)
>>  {
>> --
>> 1.8.3.1
>>
>
>--
>Michal Hocko
>SUSE Labs

Thread overview: 21+ messages
2020-01-16  4:11 Li Xinhai
2020-01-16  9:56 ` Michal Hocko
2020-01-16 13:50   ` Li Xinhai [this message]
2020-01-16 15:18     ` Michal Hocko
2020-01-16 15:38       ` Li Xinhai
2020-01-17  3:16         ` Li Xinhai
2020-01-18  3:11           ` Li Xinhai
2020-01-18 15:27             ` Li Xinhai
2020-01-20 10:12             ` Michal Hocko
2020-01-20 15:37               ` Li Xinhai
2020-01-20 16:05                 ` Michal Hocko
2020-01-21  3:42                   ` Anshuman Khandual
2020-01-21 13:08                     ` Li Xinhai
2020-01-21 12:44                   ` Li Xinhai
2020-01-20  9:21       ` Anshuman Khandual
2020-01-20 11:32         ` Michal Hocko
2020-01-21  3:22           ` Anshuman Khandual
2020-01-20 14:19         ` Li Xinhai
2020-01-22  6:05 ` Anshuman Khandual
2020-01-22 13:21   ` Li Xinhai
2020-01-23  7:48     ` Anshuman Khandual
