* [PATCH] powerpc: convert put_page() to put_user_page*()
@ 2019-08-05  2:38 john.hubbard
  2019-08-07 23:24 ` kbuild test robot
From: john.hubbard @ 2019-08-05  2:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Jan Kara, linux-mm, John Hubbard, linuxppc-dev, LKML,
	Christoph Hellwig, Jason Gunthorpe, Jerome Glisse, linux-fsdevel,
	Ira Weiny, Christoph Hellwig

From: John Hubbard <jhubbard@nvidia.com>

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
mm_iommu_unpin(): it now ultimately calls set_page_dirty_lock(),
instead of SetPageDirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
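
For reference, the general shape of the conversion is as follows. This is a
minimal sketch, not code from this patch; it assumes the three-argument
put_user_pages_dirty_lock() added by the prerequisite series, and the example
function and parameter names are illustrative:

	/* Hypothetical caller; needs <linux/mm.h>. */
	static int example_pin_and_release(unsigned long uaddr, bool wrote)
	{
		struct page *page;

		if (get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page) != 1)
			return -EFAULT;

		/* ... access the pinned page (e.g. DMA or CPU copy) ... */

		/*
		 * Old pattern:
		 *	if (wrote)
		 *		set_page_dirty_lock(page);
		 *	put_page(page);
		 *
		 * New pattern: one helper that dirties the page (with
		 * set_page_dirty_lock() semantics) and releases the
		 * get_user_pages*() reference.
		 */
		put_user_pages_dirty_lock(&page, 1, wrote);
		return 0;
	}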

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  4 ++--
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 19 ++++++++++++++-----
 arch/powerpc/kvm/e500_mmu.c            |  3 +--
 arch/powerpc/mm/book3s64/iommu_api.c   | 11 +++++------
 4 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 9a75f0e1933b..18646b738ce1 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -731,7 +731,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		 * we have to drop the reference on the correct tail
 		 * page to match the get inside gup()
 		 */
-		put_page(pages[0]);
+		put_user_page(pages[0]);
 	}
 	return ret;
 
@@ -1206,7 +1206,7 @@ void kvmppc_unpin_guest_page(struct kvm *kvm, void *va, unsigned long gpa,
 	unsigned long gfn;
 	int srcu_idx;
 
-	put_page(page);
+	put_user_page(page);
 
 	if (!dirty)
 		return;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 2d415c36a61d..f53273fbfa2d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -821,8 +821,12 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	 */
 	if (!ptep) {
 		local_irq_enable();
-		if (page)
-			put_page(page);
+		if (page) {
+			if (upgrade_write)
+				put_user_page(page);
+			else
+				put_page(page);
+		}
 		return RESUME_GUEST;
 	}
 	pte = *ptep;
@@ -870,9 +874,14 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 		*levelp = level;
 
 	if (page) {
-		if (!ret && (pte_val(pte) & _PAGE_WRITE))
-			set_page_dirty_lock(page);
-		put_page(page);
+		bool dirty = !ret && (pte_val(pte) & _PAGE_WRITE);
+		if (upgrade_write)
+			put_user_pages_dirty_lock(&page, 1, dirty);
+		else {
+			if (dirty)
+				set_page_dirty_lock(page);
+			put_page(page);
+		}
 	}
 
 	/* Increment number of large pages if we (successfully) inserted one */
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index 2d910b87e441..67bb8d59d4b1 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -850,8 +850,7 @@ int kvm_vcpu_ioctl_config_tlb(struct kvm_vcpu *vcpu,
  free_privs_first:
 	kfree(privs[0]);
  put_pages:
-	for (i = 0; i < num_pages; i++)
-		put_page(pages[i]);
+	put_user_pages(pages, num_pages);
  free_pages:
 	kfree(pages);
 	return ret;
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index b056cae3388b..e126193ba295 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -170,9 +170,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	return 0;
 
 free_exit:
-	/* free the reference taken */
-	for (i = 0; i < pinned; i++)
-		put_page(mem->hpages[i]);
+	/* free the references taken */
+	put_user_pages(mem->hpages, pinned);
 
 	vfree(mem->hpas);
 	kfree(mem);
@@ -203,6 +202,7 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 {
 	long i;
 	struct page *page = NULL;
+	bool dirty = false;
 
 	if (!mem->hpas)
 		return;
@@ -215,10 +215,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 		if (!page)
 			continue;
 
-		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
-			SetPageDirty(page);
+		dirty = mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
 
-		put_page(page);
+		put_user_pages_dirty_lock(&page, 1, dirty);
 		mem->hpas[i] = 0;
 	}
 }
-- 
2.22.0



* Re: [PATCH] powerpc: convert put_page() to put_user_page*()
  2019-08-05  2:38 [PATCH] powerpc: convert put_page() to put_user_page*() john.hubbard
@ 2019-08-07 23:24 ` kbuild test robot
  2019-08-07 23:34   ` John Hubbard
From: kbuild test robot @ 2019-08-07 23:24 UTC (permalink / raw)
  To: john.hubbard
  Cc: Jan Kara, linux-mm, John Hubbard, linuxppc-dev, LKML,
	Christoph Hellwig, Jason Gunthorpe, Jerome Glisse, kbuild-all,
	linux-fsdevel, Andrew Morton, Ira Weiny, Christoph Hellwig


Hi,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.3-rc3 next-20190807]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/powerpc-convert-put_page-to-put_user_page/20190805-132131
config: powerpc-allmodconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/powerpc/kvm/book3s_64_mmu_radix.c: In function 'kvmppc_book3s_instantiate_page':
>> arch/powerpc/kvm/book3s_64_mmu_radix.c:879:4: error: too many arguments to function 'put_user_pages_dirty_lock'
       put_user_pages_dirty_lock(&page, 1, dirty);
       ^~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/powerpc/include/asm/io.h:29:0,
                    from include/linux/io.h:13,
                    from include/linux/irq.h:20,
                    from arch/powerpc/include/asm/hardirq.h:6,
                    from include/linux/hardirq.h:9,
                    from include/linux/kvm_host.h:7,
                    from arch/powerpc/kvm/book3s_64_mmu_radix.c:10:
   include/linux/mm.h:1061:6: note: declared here
    void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
         ^~~~~~~~~~~~~~~~~~~~~~~~~
--
   arch/powerpc/mm/book3s64/iommu_api.c: In function 'mm_iommu_unpin':
>> arch/powerpc/mm/book3s64/iommu_api.c:220:3: error: too many arguments to function 'put_user_pages_dirty_lock'
      put_user_pages_dirty_lock(&page, 1, dirty);
      ^~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from include/linux/migrate.h:5:0,
                    from arch/powerpc/mm/book3s64/iommu_api.c:13:
   include/linux/mm.h:1061:6: note: declared here
    void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
         ^~~~~~~~~~~~~~~~~~~~~~~~~

vim +/put_user_pages_dirty_lock +879 arch/powerpc/kvm/book3s_64_mmu_radix.c

   765	
   766	int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
   767					   unsigned long gpa,
   768					   struct kvm_memory_slot *memslot,
   769					   bool writing, bool kvm_ro,
   770					   pte_t *inserted_pte, unsigned int *levelp)
   771	{
   772		struct kvm *kvm = vcpu->kvm;
   773		struct page *page = NULL;
   774		unsigned long mmu_seq;
   775		unsigned long hva, gfn = gpa >> PAGE_SHIFT;
   776		bool upgrade_write = false;
   777		bool *upgrade_p = &upgrade_write;
   778		pte_t pte, *ptep;
   779		unsigned int shift, level;
   780		int ret;
   781		bool large_enable;
   782	
   783		/* used to check for invalidations in progress */
   784		mmu_seq = kvm->mmu_notifier_seq;
   785		smp_rmb();
   786	
   787		/*
   788		 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
   789		 * do it with !atomic && !async, which is how we call it.
   790		 * We always ask for write permission since the common case
   791		 * is that the page is writable.
   792		 */
   793		hva = gfn_to_hva_memslot(memslot, gfn);
   794		if (!kvm_ro && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
   795			upgrade_write = true;
   796		} else {
   797			unsigned long pfn;
   798	
   799			/* Call KVM generic code to do the slow-path check */
   800			pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
   801						   writing, upgrade_p);
   802			if (is_error_noslot_pfn(pfn))
   803				return -EFAULT;
   804			page = NULL;
   805			if (pfn_valid(pfn)) {
   806				page = pfn_to_page(pfn);
   807				if (PageReserved(page))
   808					page = NULL;
   809			}
   810		}
   811	
   812		/*
   813		 * Read the PTE from the process' radix tree and use that
   814		 * so we get the shift and attribute bits.
   815		 */
   816		local_irq_disable();
   817		ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
   818		/*
   819		 * If the PTE disappeared temporarily due to a THP
   820		 * collapse, just return and let the guest try again.
   821		 */
   822		if (!ptep) {
   823			local_irq_enable();
   824			if (page) {
   825				if (upgrade_write)
   826					put_user_page(page);
   827				else
   828					put_page(page);
   829			}
   830			return RESUME_GUEST;
   831		}
   832		pte = *ptep;
   833		local_irq_enable();
   834	
   835		/* If we're logging dirty pages, always map single pages */
   836		large_enable = !(memslot->flags & KVM_MEM_LOG_DIRTY_PAGES);
   837	
   838		/* Get pte level from shift/size */
   839		if (large_enable && shift == PUD_SHIFT &&
   840		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
   841		    (hva & (PUD_SIZE - PAGE_SIZE))) {
   842			level = 2;
   843		} else if (large_enable && shift == PMD_SHIFT &&
   844			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
   845			   (hva & (PMD_SIZE - PAGE_SIZE))) {
   846			level = 1;
   847		} else {
   848			level = 0;
   849			if (shift > PAGE_SHIFT) {
   850				/*
   851				 * If the pte maps more than one page, bring over
   852				 * bits from the virtual address to get the real
   853				 * address of the specific single page we want.
   854				 */
   855				unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
   856				pte = __pte(pte_val(pte) | (hva & rpnmask));
   857			}
   858		}
   859	
   860		pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
   861		if (writing || upgrade_write) {
   862			if (pte_val(pte) & _PAGE_WRITE)
   863				pte = __pte(pte_val(pte) | _PAGE_DIRTY);
   864		} else {
   865			pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
   866		}
   867	
   868		/* Allocate space in the tree and write the PTE */
   869		ret = kvmppc_create_pte(kvm, kvm->arch.pgtable, pte, gpa, level,
   870					mmu_seq, kvm->arch.lpid, NULL, NULL);
   871		if (inserted_pte)
   872			*inserted_pte = pte;
   873		if (levelp)
   874			*levelp = level;
   875	
   876		if (page) {
   877			bool dirty = !ret && (pte_val(pte) & _PAGE_WRITE);
   878			if (upgrade_write)
 > 879				put_user_pages_dirty_lock(&page, 1, dirty);
   880			else {
   881				if (dirty)
   882					set_page_dirty_lock(page);
   883				put_page(page);
   884			}
   885		}
   886	
   887		/* Increment number of large pages if we (successfully) inserted one */
   888		if (!ret) {
   889			if (level == 1)
   890				kvm->stat.num_2M_pages++;
   891			else if (level == 2)
   892				kvm->stat.num_1G_pages++;
   893		}
   894	
   895		return ret;
   896	}
   897	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 62277 bytes --]


* Re: [PATCH] powerpc: convert put_page() to put_user_page*()
  2019-08-07 23:24 ` kbuild test robot
@ 2019-08-07 23:34   ` John Hubbard
From: John Hubbard @ 2019-08-07 23:34 UTC (permalink / raw)
  To: kbuild test robot, john.hubbard
  Cc: Jan Kara, linux-mm, linuxppc-dev, LKML, Christoph Hellwig,
	Jason Gunthorpe, Jerome Glisse, kbuild-all, linux-fsdevel,
	Andrew Morton, Ira Weiny, Christoph Hellwig

On 8/7/19 4:24 PM, kbuild test robot wrote:
> Hi,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on linus/master]
> [cannot apply to v5.3-rc3 next-20190807]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/powerpc-convert-put_page-to-put_user_page/20190805-132131
> config: powerpc-allmodconfig (attached as .config)
> compiler: powerpc64-linux-gcc (GCC) 7.4.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         GCC_VERSION=7.4.0 make.cross ARCH=powerpc 
> 
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
> 
> All errors (new ones prefixed by >>):
> 
>    arch/powerpc/kvm/book3s_64_mmu_radix.c: In function 'kvmppc_book3s_instantiate_page':
>>> arch/powerpc/kvm/book3s_64_mmu_radix.c:879:4: error: too many arguments to function 'put_user_pages_dirty_lock'
>        put_user_pages_dirty_lock(&page, 1, dirty);
>        ^~~~~~~~~~~~~~~~~~~~~~~~~

Yep, I should have included the prerequisite patch. This standalone patch is
obsolete now; please refer to the larger patchset instead:

   https://lore.kernel.org/r/20190807013340.9706-1-jhubbard@nvidia.com
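
(For context, the build failure is just a prototype mismatch. Roughly, the two
declarations in play are:

	/* what mainline's include/linux/mm.h declares at this point: */
	void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);

	/*
	 * what this patch assumes, from the prerequisite series; the name
	 * of the third parameter here is illustrative:
	 */
	void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
				       bool make_dirty);

so building against linus/master without that series fails as reported.)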

thanks,
-- 
John Hubbard
NVIDIA

