From: Miaohe Lin <linmiaohe@huawei.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>,
	David Hildenbrand <david@redhat.com>,
	Michal Hocko <mhocko@suse.com>, Peter Xu <peterx@redhat.com>,
	Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	"Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Prakash Sangappa <prakash.sangappa@oracle.com>,
	James Houghton <jthoughton@google.com>,
	Mina Almasry <almasrymina@google.com>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Ray Fucillo <Ray.Fucillo@intersystems.com>,
	Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 4/8] hugetlb: handle truncate racing with page faults
Date: Sat, 27 Aug 2022 16:02:41 +0800
Message-ID: <b26db6af-ffde-788d-2148-2e0992f96229@huawei.com>
In-Reply-To: <20220824175757.20590-5-mike.kravetz@oracle.com>

On 2022/8/25 1:57, Mike Kravetz wrote:
> When page fault code needs to allocate and instantiate a new hugetlb
> page (hugetlb_no_page), it checks early to determine if the fault is
> beyond i_size.  When discovered early, it is easy to abort the fault and
> return an error.  However, it becomes much more difficult to handle when
> discovered later after allocating the page and consuming reservations
> and adding to the page cache.  Backing out changes in such instances
> becomes difficult and error prone.
> 
> Instead of trying to catch and backout all such races, use the hugetlb
> fault mutex to handle truncate racing with page faults.  The most
> significant change is modification of the routine remove_inode_hugepages
> such that it will take the fault mutex for EVERY index in the truncated
> range (or hole in the case of hole punch).  Since remove_inode_hugepages
> is called in the truncate path after updating i_size, we can experience
> races as follows.
> - truncate code updates i_size and takes fault mutex before a racing
>   fault.  After fault code takes mutex, it will notice fault beyond
>   i_size and abort early.
> - fault code obtains mutex, and truncate updates i_size after early
>   checks in fault code.  Fault code will add a page beyond i_size.
>   When truncate code takes mutex for page/index, it will remove the
>   page.
> - truncate updates i_size, but fault code obtains mutex first.  If
>   fault code sees updated i_size it will abort early.  If fault code
>   does not see updated i_size, it will add page beyond i_size and
>   truncate code will remove page when it obtains fault mutex.
> 
> Note, for performance reasons remove_inode_hugepages will still use
> filemap_get_folios for bulk folio lookups.  For indices not returned in
> the bulk lookup, it will need to lookup individual folios to check for
> races with page fault.
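
To make the serialization concrete: both sides key the same mutex off
(mapping, index), so for any given index exactly one of truncate and
fault runs at a time.  A rough sketch of the pattern, reusing the
existing hugetlb_fault_mutex_hash() helper (not the exact code in the
patch):

	/* truncate side, called after i_size has been updated */
	for (index = start; index < end; index++) {
		u32 hash = hugetlb_fault_mutex_hash(mapping, index);

		mutex_lock(&hugetlb_fault_mutex_table[hash]);
		/* indices missed by the bulk filemap_get_folios()
		 * lookup are re-checked one at a time */
		folio = filemap_get_folio(mapping, index);
		if (folio) {
			remove_inode_single_folio(h, inode, mapping,
						  folio, index, truncate_op);
			folio_put(folio);
		}
		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
	}

	/* fault side: hugetlb_fault() already takes the same mutex
	 * before hugetlb_no_page() checks idx against i_size */
	hash = hugetlb_fault_mutex_hash(mapping, idx);
	mutex_lock(&hugetlb_fault_mutex_table[hash]);
	...
	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
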
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  fs/hugetlbfs/inode.c | 184 +++++++++++++++++++++++++++++++------------
>  mm/hugetlb.c         |  41 +++++-----
>  2 files changed, 152 insertions(+), 73 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index d98c6edbd1a4..e83fd31671b3 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -411,6 +411,95 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>  	}
>  }
>  
> +/*
> + * Called with hugetlb fault mutex held.
> + * Returns true if page was actually removed, false otherwise.
> + */
> +static bool remove_inode_single_folio(struct hstate *h, struct inode *inode,
> +					struct address_space *mapping,
> +					struct folio *folio, pgoff_t index,
> +					bool truncate_op)
> +{
> +	bool ret = false;
> +
> +	/*
> +	 * If folio is mapped, it was faulted in after being
> +	 * unmapped in caller.  Unmap (again) while holding
> +	 * the fault mutex.  The mutex will prevent faults
> +	 * until we finish removing the folio.
> +	 */
> +	if (unlikely(folio_mapped(folio))) {
> +		i_mmap_lock_write(mapping);
> +		hugetlb_vmdelete_list(&mapping->i_mmap,
> +					index * pages_per_huge_page(h),
> +					(index + 1) * pages_per_huge_page(h),
> +					ZAP_FLAG_DROP_MARKER);
> +		i_mmap_unlock_write(mapping);
> +	}
> +
> +	folio_lock(folio);
> +	/*
> +	 * After locking page, make sure mapping is the same.
> +	 * We could have raced with page fault populate and
> +	 * backout code.
> +	 */
> +	if (folio_mapping(folio) == mapping) {

Could you explain this more? IIUC, the page fault path won't remove the hugetlb page from the
page cache anymore. So is this check unneeded? Or should we keep it in case the code changes
in the future?
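
For my own understanding, the general pattern this check implements:
page cache removal clears folio->mapping, so re-checking it under
folio_lock() detects a racing removal.  A generic sketch (independent
of whether any path in this series still removes the folio):

	folio_lock(folio);
	if (folio_mapping(folio) != mapping) {
		/*
		 * A racing path removed the folio from this mapping
		 * (removal clears folio->mapping) between our lookup
		 * and folio_lock(); nothing left for us to do.
		 */
		folio_unlock(folio);
		return false;
	}
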

> +		/*
> +		 * We must remove the folio from page cache before removing
> +		 * the region/ reserve map (hugetlb_unreserve_pages).  In
> +		 * rare out of memory conditions, removal of the region/reserve
> +		 * map could fail.  Correspondingly, the subpool and global
> +		 * reserve usage count can need to be adjusted.
> +		 */
> +		VM_BUG_ON(HPageRestoreReserve(&folio->page));
> +		hugetlb_delete_from_page_cache(&folio->page);
> +		ret = true;
> +		if (!truncate_op) {
> +			if (unlikely(hugetlb_unreserve_pages(inode, index,
> +								index + 1, 1)))
> +				hugetlb_fix_reserve_counts(inode);
> +		}
> +	}
> +
> +	folio_unlock(folio);
> +	return ret;
> +}

<snip>
> @@ -5584,9 +5585,13 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  		clear_huge_page(page, address, pages_per_huge_page(h));
>  		__SetPageUptodate(page);
>  		new_page = true;
> +		if (HPageRestoreReserve(page))
> +			reserve_alloc = true;
>  
>  		if (vma->vm_flags & VM_MAYSHARE) {
> -			int err = hugetlb_add_to_page_cache(page, mapping, idx);
> +			int err;
> +
> +			err = hugetlb_add_to_page_cache(page, mapping, idx);
>  			if (err) {
>  				restore_reserve_on_error(h, vma, haddr, page);
>  				put_page(page);
> @@ -5642,10 +5647,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  	}
>  
>  	ptl = huge_pte_lock(h, mm, ptep);
> -	size = i_size_read(mapping->host) >> huge_page_shift(h);
> -	if (idx >= size)
> -		goto backout;
> -
>  	ret = 0;
>  	/* If pte changed from under us, retry */
>  	if (!pte_same(huge_ptep_get(ptep), old_pte))
> @@ -5689,10 +5690,18 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  backout:
>  	spin_unlock(ptl);
>  backout_unlocked:
> -	unlock_page(page);
> -	/* restore reserve for newly allocated pages not in page cache */
> -	if (new_page && !new_pagecache_page)
> +	if (new_page && !new_pagecache_page) {
> +		/*
> +		 * If reserve was consumed, make sure flag is set so that it
> +		 * will be restored in free_huge_page().
> +		 */
> +		if (reserve_alloc)
> +			SetHPageRestoreReserve(page);

If code reaches here, it should be a newly allocated page that was not added to the hugetlb page cache.
Note that a failure to add the page to the page cache would have returned already. So the page must be
anon? If so, HPageRestoreReserve isn't cleared yet, as it's only cleared right before set_huge_pte_at. Thus
the above check can be removed?
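
For reference, my reading of the relevant flow in hugetlb_no_page()
(a from-memory sketch, not a verbatim quote of the source):

	ptl = huge_pte_lock(h, mm, ptep);
	...
	if (anon_rmap) {
		/* anon case: the flag is only cleared here, just
		 * before the pte is installed, so any earlier jump
		 * to backout still sees HPageRestoreReserve set */
		ClearHPageRestoreReserve(page);
		hugepage_add_new_anon_rmap(page, vma, haddr);
	} else
		page_dup_rmap(page, true);
	...
	set_huge_pte_at(mm, haddr, ptep, new_pte);
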

Anyway, the patch looks good to me.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin

