From: Miaohe Lin <linmiaohe@huawei.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: <akpm@linux-foundation.org>, <mike.kravetz@oracle.com>,
	<shy828301@gmail.com>, <willy@infradead.org>, <ziy@nvidia.com>,
	<minchan@kernel.org>, <apopple@nvidia.com>,
	<o451686892@gmail.com>, <almasrymina@google.com>,
	<jhubbard@nvidia.com>, <rcampbell@nvidia.com>,
	<peterx@redhat.com>, <naoya.horiguchi@nec.com>, <mhocko@suse.com>,
	<riel@redhat.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 16/16] mm/migration: fix potential pte_unmap on an not mapped pte
Date: Wed, 9 Mar 2022 16:48:39 +0800	[thread overview]
Message-ID: <56baf172-4a0f-676e-86ef-61e2fef87520@huawei.com> (raw)
In-Reply-To: <877d94gczc.fsf@yhuang6-desk2.ccr.corp.intel.com>

On 2022/3/9 8:56, Huang, Ying wrote:
> Miaohe Lin <linmiaohe@huawei.com> writes:
> 
>> On 2022/3/7 13:37, Huang, Ying wrote:
>>> Miaohe Lin <linmiaohe@huawei.com> writes:
>>>
>>>> __migration_entry_wait and migration_entry_wait_on_locked assume the pte is
>>>> always mapped by the caller. But this is not the case when they're called from
>>>> migration_entry_wait_huge and follow_huge_pmd. Add a parameter, unmap, to
>>>> indicate whether the pte needs to be unmapped, to fix this issue.
>>>
>>> This seems a possible issue.
>>>
>>> Have you tested it?  It appears possible to trigger the issue.  If so,
>>> please paste the error log here.
>>
>> This might only happen on an x86 machine with HIGHMEM enabled, which is
>> uncommon now (at least in my work environment).
> 
> Yes.  32-bit isn't popular now.  But you can always test it via a virtual
> machine.
> 

Good idea.
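
For reference, the reason this only shows up with HIGHMEM is that pte_unmap()
only does real work when CONFIG_HIGHPTE is set. A simplified sketch of the
generic definitions (paraphrased, not copied verbatim from
include/linux/pgtable.h) looks roughly like this:

#ifdef CONFIG_HIGHPTE
/* The pte page may live in highmem, so it must be kmapped before use. */
#define pte_offset_map(pmd, address)					\
	((pte_t *)kmap_local_page(pmd_page(*(pmd))) + pte_index(address))
/* ... and pte_unmap() undoes exactly that kmap. */
#define pte_unmap(pte)		kunmap_local((pte))
#else
/* Without CONFIG_HIGHPTE the pte page is always in the direct map. */
#define pte_offset_map(pmd, address)	pte_offset_kernel((pmd), (address))
/* pte_unmap() is then a no-op, so a bogus call is silently harmless. */
#define pte_unmap(pte)		do { } while (0)
#endif

So when migration_entry_wait_huge() or follow_huge_pmd() pass in a pointer
that never came from pte_offset_map(), calling pte_unmap() on it kunmaps
something that was never kmapped. With !CONFIG_HIGHPTE that is a no-op,
which is why the problem is invisible on the usual 64-bit configs.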

>> So I can't paste the error log. The issues in this series were mainly found
>> through code inspection, together with some tests.
> 
> If it's not too complex, I still think it's better to test your code and
> verify the problem.

Sure! :)
Many thanks.
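
If it helps, the rough plan I have in mind for a 32-bit VM (CONFIG_HIGHMEM=y,
CONFIG_HIGHPTE=y, two NUMA nodes, some hugetlb pages reserved via
vm.nr_hugepages) is the untested sketch below: one thread keeps touching a
hugetlb page while the main thread bounces it between nodes with
move_pages(2), so the toucher should fault on the migration entry and go
through migration_entry_wait_huge(). Build with -lnuma -lpthread; names and
flags may need adjusting.

#define _GNU_SOURCE
#include <numaif.h>	/* move_pages() from libnuma */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)

static volatile int stop;

static void *toucher(void *mem)
{
	/*
	 * Repeated writes should fault on the migration entry while the
	 * page is under migration, reaching migration_entry_wait_huge().
	 */
	while (!stop)
		*(volatile char *)mem = 1;
	return NULL;
}

int main(void)
{
	void *mem = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	pthread_t t;
	int i;

	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(mem, 0, HPAGE_SIZE);	/* fault the huge page in first */
	pthread_create(&t, NULL, toucher, mem);

	for (i = 0; i < 100000; i++) {
		void *pages[1] = { mem };
		int node = i & 1;	/* bounce between node 0 and node 1 */
		int status;

		if (move_pages(0, 1, pages, &node, &status, MPOL_MF_MOVE) < 0)
			perror("move_pages");
	}
	stop = 1;
	pthread_join(t, NULL);
	return 0;
}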

> 
> Best Regards,
> Huang, Ying
> 
>> Thanks. :)
>>
>>>
>>> BTW: have you tested the other functionality issues in your patchset?
>>>
>>> Best Regards,
>>> Huang, Ying
>>>
>>>> Fixes: 30dad30922cc ("mm: migration: add migrate_entry_wait_huge()")
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  include/linux/migrate.h |  2 +-
>>>>  include/linux/swapops.h |  4 ++--
>>>>  mm/filemap.c            | 10 +++++-----
>>>>  mm/hugetlb.c            |  2 +-
>>>>  mm/migrate.c            | 14 ++++++++------
>>>>  5 files changed, 17 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>>>> index 66a34eae8cb6..3ef4ff699bef 100644
>>>> --- a/include/linux/migrate.h
>>>> +++ b/include/linux/migrate.h
>>>> @@ -41,7 +41,7 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
>>>>  extern int migrate_page_move_mapping(struct address_space *mapping,
>>>>  		struct page *newpage, struct page *page, int extra_count);
>>>>  void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
>>>> -				spinlock_t *ptl);
>>>> +				spinlock_t *ptl, bool unmap);
>>>>  void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
>>>>  void folio_migrate_copy(struct folio *newfolio, struct folio *folio);
>>>>  int folio_migrate_mapping(struct address_space *mapping,
>>>> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
>>>> index d356ab4047f7..d66556875d7d 100644
>>>> --- a/include/linux/swapops.h
>>>> +++ b/include/linux/swapops.h
>>>> @@ -213,7 +213,7 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
>>>>  }
>>>>  
>>>>  extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
>>>> -					spinlock_t *ptl);
>>>> +					spinlock_t *ptl, bool unmap);
>>>>  extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>>>>  					unsigned long address);
>>>>  extern void migration_entry_wait_huge(struct vm_area_struct *vma,
>>>> @@ -235,7 +235,7 @@ static inline int is_migration_entry(swp_entry_t swp)
>>>>  }
>>>>  
>>>>  static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
>>>> -					spinlock_t *ptl) { }
>>>> +					spinlock_t *ptl, bool unmap) { }
>>>>  static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>>>>  					 unsigned long address) { }
>>>>  static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>> index 8f7e6088ee2a..18c353d52aae 100644
>>>> --- a/mm/filemap.c
>>>> +++ b/mm/filemap.c
>>>> @@ -1389,6 +1389,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
>>>>   * @ptep: mapped pte pointer. Will return with the ptep unmapped. Only required
>>>>   *        for pte entries, pass NULL for pmd entries.
>>>>   * @ptl: already locked ptl. This function will drop the lock.
>>>> + * @unmap: indicating whether ptep needs to be unmapped.
>>>>   *
>>>>   * Wait for a migration entry referencing the given page to be removed. This is
>>>>   * equivalent to put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE) except
>>>> @@ -1402,7 +1403,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
>>>>   * there.
>>>>   */
>>>>  void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
>>>> -				spinlock_t *ptl)
>>>> +				spinlock_t *ptl, bool unmap)
>>>>  {
>>>>  	struct wait_page_queue wait_page;
>>>>  	wait_queue_entry_t *wait = &wait_page.wait;
>>>> @@ -1439,10 +1440,9 @@ void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
>>>>  	 * a valid reference to the page, and it must take the ptl to remove the
>>>>  	 * migration entry. So the page is valid until the ptl is dropped.
>>>>  	 */
>>>> -	if (ptep)
>>>> -		pte_unmap_unlock(ptep, ptl);
>>>> -	else
>>>> -		spin_unlock(ptl);
>>>> +	spin_unlock(ptl);
>>>> +	if (unmap && ptep)
>>>> +		pte_unmap(ptep);
>>>>  
>>>>  	for (;;) {
>>>>  		unsigned int flags;
>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>> index 07668781c246..8088128c25db 100644
>>>> --- a/mm/hugetlb.c
>>>> +++ b/mm/hugetlb.c
>>>> @@ -6713,7 +6713,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>>>>  	} else {
>>>>  		if (is_hugetlb_entry_migration(pte)) {
>>>>  			spin_unlock(ptl);
>>>> -			__migration_entry_wait(mm, (pte_t *)pmd, ptl);
>>>> +			__migration_entry_wait(mm, (pte_t *)pmd, ptl, false);
>>>>  			goto retry;
>>>>  		}
>>>>  		/*
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 98a968e6f465..5519261f54fe 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -281,7 +281,7 @@ void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked)
>>>>   * When we return from this function the fault will be retried.
>>>>   */
>>>>  void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
>>>> -				spinlock_t *ptl)
>>>> +				spinlock_t *ptl, bool unmap)
>>>>  {
>>>>  	pte_t pte;
>>>>  	swp_entry_t entry;
>>>> @@ -295,10 +295,12 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
>>>>  	if (!is_migration_entry(entry))
>>>>  		goto out;
>>>>  
>>>> -	migration_entry_wait_on_locked(entry, ptep, ptl);
>>>> +	migration_entry_wait_on_locked(entry, ptep, ptl, unmap);
>>>>  	return;
>>>>  out:
>>>> -	pte_unmap_unlock(ptep, ptl);
>>>> +	spin_unlock(ptl);
>>>> +	if (unmap)
>>>> +		pte_unmap(ptep);
>>>>  }
>>>>  
>>>>  void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>>>> @@ -306,14 +308,14 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>>>>  {
>>>>  	spinlock_t *ptl = pte_lockptr(mm, pmd);
>>>>  	pte_t *ptep = pte_offset_map(pmd, address);
>>>> -	__migration_entry_wait(mm, ptep, ptl);
>>>> +	__migration_entry_wait(mm, ptep, ptl, true);
>>>>  }
>>>>  
>>>>  void migration_entry_wait_huge(struct vm_area_struct *vma,
>>>>  		struct mm_struct *mm, pte_t *pte)
>>>>  {
>>>>  	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), mm, pte);
>>>> -	__migration_entry_wait(mm, pte, ptl);
>>>> +	__migration_entry_wait(mm, pte, ptl, false);
>>>>  }
>>>>  
>>>>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>>> @@ -324,7 +326,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
>>>>  	ptl = pmd_lock(mm, pmd);
>>>>  	if (!is_pmd_migration_entry(*pmd))
>>>>  		goto unlock;
>>>> -	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl);
>>>> +	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl, false);
>>>>  	return;
>>>>  unlock:
>>>>  	spin_unlock(ptl);
>>> .
>>>
> .
> 



Thread overview: 73+ messages
2022-03-04  9:33 [PATCH 00/16] A few cleanup and fixup patches for migration Miaohe Lin
2022-03-04  9:33 ` [PATCH 01/16] mm/migration: remove unneeded local variable mapping_locked Miaohe Lin
2022-03-04 13:48   ` Muchun Song
2022-03-07  2:00   ` Huang, Ying
2022-03-08 11:41     ` Miaohe Lin
2022-03-04  9:33 ` [PATCH 02/16] mm/migration: remove unneeded out label Miaohe Lin
2022-03-04 12:12   ` Muchun Song
2022-03-07  2:03   ` Huang, Ying
2022-03-08 11:44     ` Miaohe Lin
2022-03-04  9:33 ` [PATCH 03/16] mm/migration: remove unneeded local variable page_lru Miaohe Lin
2022-03-07 10:58   ` Alistair Popple
2022-03-08 11:29     ` Miaohe Lin
2022-03-04  9:33 ` [PATCH 04/16] mm/migration: reduce the rcu lock duration Miaohe Lin
2022-03-04 12:16   ` Muchun Song
2022-03-07  2:32   ` Huang, Ying
2022-03-08 12:09     ` Miaohe Lin
2022-03-09  1:02       ` Huang, Ying
2022-03-09  8:28         ` Miaohe Lin
2022-03-04  9:33 ` [PATCH 05/16] mm/migration: fix the confusing PageTransHuge check Miaohe Lin
2022-03-04 12:20   ` Muchun Song
2022-03-04  9:33 ` [PATCH 06/16] mm/migration: use helper function vma_lookup() in add_page_for_migration Miaohe Lin
2022-03-04  9:34 ` [PATCH 07/16] mm/migration: use helper macro min_t in do_pages_stat Miaohe Lin
2022-03-04 13:51   ` Muchun Song
2022-03-07  1:14   ` Andrew Morton
2022-03-07 11:51     ` Miaohe Lin
2022-03-04  9:34 ` [PATCH 08/16] mm/migration: avoid unneeded nodemask_t initialization Miaohe Lin
2022-03-04 13:57   ` Muchun Song
2022-03-07  2:31   ` Baolin Wang
2022-03-04  9:34 ` [PATCH 09/16] mm/migration: remove some duplicated codes in migrate_pages Miaohe Lin
2022-03-04 15:16   ` Zi Yan
2022-03-07  1:44   ` Baolin Wang
2022-03-04  9:34 ` [PATCH 10/16] mm/migration: remove PG_writeback handle in folio_migrate_flags Miaohe Lin
2022-03-07  1:21   ` Andrew Morton
2022-03-07 12:44     ` Miaohe Lin
2022-03-04  9:34 ` [PATCH 11/16] mm/migration: remove unneeded lock page and PageMovable check Miaohe Lin
2022-03-04  9:34 ` [PATCH 12/16] mm/migration: fix potential page refcounts leak in migrate_pages Miaohe Lin
2022-03-04 15:21   ` Zi Yan
2022-03-07  1:57   ` Baolin Wang
2022-03-07  5:02     ` Huang, Ying
2022-03-07  6:00       ` Baolin Wang
2022-03-07 12:03         ` Miaohe Lin
2022-03-07 12:01     ` Miaohe Lin
2022-03-07  5:01   ` Huang, Ying
2022-03-07 12:11     ` Miaohe Lin
2022-03-04  9:34 ` [PATCH 13/16] mm/migration: return errno when isolate_huge_page failed Miaohe Lin
2022-03-05  2:23   ` Muchun Song
2022-03-07 11:46     ` Miaohe Lin
2022-03-07  2:14   ` Baolin Wang
2022-03-07 12:20     ` Miaohe Lin
2022-03-08  1:32       ` Baolin Wang
2022-03-08  6:34         ` Miaohe Lin
2022-03-07  5:07   ` Huang, Ying
2022-03-08 12:12     ` Miaohe Lin
2022-03-09  1:00       ` Huang, Ying
2022-03-09  8:29         ` Miaohe Lin
2022-03-04  9:34 ` [PATCH 14/16] mm/migration: fix potential invalid node access for reclaim-based migration Miaohe Lin
2022-03-07  2:25   ` Baolin Wang
2022-03-07  5:14     ` Huang, Ying
2022-03-07  7:04       ` Baolin Wang
2022-03-08 11:46         ` Miaohe Lin
2022-03-07  5:14   ` Huang, Ying
2022-03-04  9:34 ` [PATCH 15/16] mm/migration: fix possible do_pages_stat_array racing with memory offline Miaohe Lin
2022-03-07  5:21   ` Huang, Ying
2022-03-07  7:01     ` Muchun Song
2022-03-07  7:42       ` Huang, Ying
2022-03-08 11:33         ` Miaohe Lin
2022-03-04  9:34 ` [PATCH 16/16] mm/migration: fix potential pte_unmap on an not mapped pte Miaohe Lin
2022-03-07  5:37   ` Huang, Ying
2022-03-08 12:19     ` Miaohe Lin
2022-03-09  0:56       ` Huang, Ying
2022-03-09  8:48         ` Miaohe Lin [this message]
2022-03-07  7:35   ` Alistair Popple
2022-03-08 11:55     ` Miaohe Lin
