* linux-next: manual merge of the mm-stable tree with the block tree
@ 2022-11-10  6:22 Stephen Rothwell
From: Stephen Rothwell @ 2022-11-10  6:22 UTC
  To: Andrew Morton, Jens Axboe
  Cc: Linux Kernel Mailing List, Linux Next Mailing List,
	Logan Gunthorpe, Mike Kravetz


Hi all,

Today's linux-next merge of the mm-stable tree got a conflict in:

  mm/hugetlb.c

between commit:

  0f0892356fa1 ("mm: allow multiple error returns in try_grab_page()")

from the block tree and commit:

  57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")

from the mm-stable tree.

I fixed it up (I think - see below) and can carry the fix as necessary.
This is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
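
For context on the resolution below: judging from its subject, commit
0f0892356fa1 changes try_grab_page() from returning a bool (true on
success) to returning an int (0 on success, a negative errno on
failure), which is why the resolved hugetlb_follow_page_mask() hunk
drops the "!" and the WARN_ON_ONCE() and treats any non-zero return
as failure.  A minimal, standalone sketch of the two calling
conventions (the stand-in functions below are illustrative
assumptions, not code from either commit):

  #include <errno.h>
  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical stand-ins modelling the two conventions; the real
   * try_grab_page() lives in mm/gup.c and takes a struct page * and
   * gup flags. */
  static bool try_grab_page_old(bool fail)	/* true == success */
  {
  	return !fail;
  }

  static int try_grab_page_new(bool fail)	/* 0 == success */
  {
  	return fail ? -ENOMEM : 0;
  }

  int main(void)
  {
  	/* Old-style caller: failure is a false return, hence the "!". */
  	if (!try_grab_page_old(true))
  		puts("old convention: grab failed");

  	/* New-style caller: failure is any non-zero (negative errno)
  	 * return, so the "!" has to go. */
  	if (try_grab_page_new(true))
  		puts("new convention: grab failed");

  	return 0;
  }

The int convention lets callers distinguish failure modes (the point
of the block-tree change); the resolution below still just maps any
error to a NULL page.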

-- 
Cheers,
Stephen Rothwell

diff --cc mm/hugetlb.c
index 3373d24e4a97,fdb36afea2b2..000000000000
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@@ -6222,6 -6199,62 +6212,62 @@@ static inline bool __follow_hugetlb_mus
  	return false;
  }
  
+ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
+ 				unsigned long address, unsigned int flags)
+ {
+ 	struct hstate *h = hstate_vma(vma);
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	unsigned long haddr = address & huge_page_mask(h);
+ 	struct page *page = NULL;
+ 	spinlock_t *ptl;
+ 	pte_t *pte, entry;
+ 
+ 	/*
+ 	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
+ 	 * follow_hugetlb_page().
+ 	 */
+ 	if (WARN_ON_ONCE(flags & FOLL_PIN))
+ 		return NULL;
+ 
+ retry:
+ 	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
+ 	if (!pte)
+ 		return NULL;
+ 
+ 	ptl = huge_pte_lock(h, mm, pte);
+ 	entry = huge_ptep_get(pte);
+ 	if (pte_present(entry)) {
+ 		page = pte_page(entry) +
+ 				((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+ 		/*
+ 		 * Note that page may be a sub-page, and with vmemmap
+ 		 * optimizations the page struct may be read only.
+ 		 * try_grab_page() will increase the ref count on the
+ 		 * head page, so this will be OK.
+ 		 *
 -		 * try_grab_page() should always succeed here, because we hold
 -		 * the ptl lock and have verified pte_present().
++		 * try_grab_page() should always be able to get the page here,
++		 * because we hold the ptl lock and have verified pte_present().
+ 		 */
 -		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
++		if (try_grab_page(page, flags)) {
+ 			page = NULL;
+ 			goto out;
+ 		}
+ 	} else {
+ 		if (is_hugetlb_entry_migration(entry)) {
+ 			spin_unlock(ptl);
+ 			__migration_entry_wait_huge(pte, ptl);
+ 			goto retry;
+ 		}
+ 		/*
+ 		 * hwpoisoned entry is treated as no_page_table in
+ 		 * follow_page_mask().
+ 		 */
+ 	}
+ out:
+ 	spin_unlock(ptl);
+ 	return page;
+ }
+ 
  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
  			 struct page **pages, struct vm_area_struct **vmas,
  			 unsigned long *position, unsigned long *nr_pages,

