* [PATCH v3 0/2] mm, drm/ttm: Fix pte insertion with customized protection
@ 2019-12-06  8:24 Thomas Hellström (VMware)
  2019-12-06  8:24 ` [PATCH v3 1/2] mm: Add a vmf_insert_mixed_prot() function Thomas Hellström (VMware)
  2019-12-06  8:24 ` [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling Thomas Hellström (VMware)
  0 siblings, 2 replies; 6+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-06  8:24 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Andrew Morton, Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

From: Thomas Hellstrom <thellstrom@vmware.com>

The drm/ttm module currently uses a modified on-stack copy of
struct vm_area_struct to be able to set a page protection with customized
caching. Fix that by adding a vmf_insert_mixed_prot() function, similar to
the existing vmf_insert_pfn_prot(), for drm/ttm to use.
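
For reference, the pattern being replaced looks roughly like this (a
simplified sketch, not the exact TTM code; error handling omitted):

        /* Old: fake up an on-stack vma copy just to carry a different pgprot */
        struct vm_area_struct cvma = *vma;

        cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, vma->vm_page_prot);
        ret = vmf_insert_mixed(&cvma, address, __pfn_to_pfn_t(pfn, PFN_DEV));

        /* New: keep the real vma and pass the desired protection explicitly */
        prot = ttm_io_prot(bo->mem.placement, vma->vm_page_prot);
        ret = vmf_insert_mixed_prot(vma, address, __pfn_to_pfn_t(pfn, PFN_DEV),
                                    prot);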

I'd like to merge this through a drm tree.

Changes since v1:
*) Formatting fixes in patch 1
*) Updated commit message of patch 2.
Changes since v2:
*) Moved vmf_insert_mixed_prot() export to patch 2 (Michal Hocko)
*) Documented under which conditions it's safe to use a page protection
   different from struct vm_area_struct::vm_page_prot. (Michal Hocko)

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>

-- 
2.21.0



* [PATCH v3 1/2] mm: Add a vmf_insert_mixed_prot() function
  2019-12-06  8:24 [PATCH v3 0/2] mm, drm/ttm: Fix pte insertion with customized protection Thomas Hellström (VMware)
@ 2019-12-06  8:24 ` Thomas Hellström (VMware)
  2019-12-06  8:24 ` [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling Thomas Hellström (VMware)
  1 sibling, 0 replies; 6+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-06  8:24 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Andrew Morton, Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

From: Thomas Hellstrom <thellstrom@vmware.com>

The TTM module today uses a hack to be able to set a page protection
different from struct vm_area_struct::vm_page_prot. To do this properly,
add the needed vm functionality as vmf_insert_mixed_prot().
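
A minimal usage sketch from a driver fault handler follows; my_drv_fault()
and my_drv_pfn() are made-up names for illustration, and only the core vm
interfaces shown are real:

        static vm_fault_t my_drv_fault(struct vm_fault *vmf)
        {
                struct vm_area_struct *vma = vmf->vma;
                pgprot_t prot;

                /* The desired caching may differ from what vm_page_prot
                 * was set up with at mmap() time.
                 */
                prot = pgprot_writecombine(vma->vm_page_prot);

                /* Requires VM_MIXEDMAP; @prot is used instead of
                 * vma->vm_page_prot when constructing the PTE.
                 */
                return vmf_insert_mixed_prot(vma, vmf->address,
                                             __pfn_to_pfn_t(my_drv_pfn(vmf),
                                                            PFN_DEV),
                                             prot);
        }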

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Christian König <christian.koenig@amd.com>
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 14 ++++++++++----
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cc292273e6ba..29575d3c1e47 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2548,6 +2548,8 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn, pgprot_t pgprot);
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 			pfn_t pfn);
+vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+			pfn_t pfn, pgprot_t pgprot);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
 		unsigned long addr, pfn_t pfn);
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f2..b9e7f1d56b1c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1719,9 +1719,9 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
 }
 
 static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
-		unsigned long addr, pfn_t pfn, bool mkwrite)
+		unsigned long addr, pfn_t pfn, pgprot_t pgprot,
+		bool mkwrite)
 {
-	pgprot_t pgprot = vma->vm_page_prot;
 	int err;
 
 	BUG_ON(!vm_mixed_ok(vma, pfn));
@@ -1764,10 +1764,16 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 	return VM_FAULT_NOPAGE;
 }
 
+vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+				 pfn_t pfn, pgprot_t pgprot)
+{
+	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
+}
+
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
 {
-	return __vm_insert_mixed(vma, addr, pfn, false);
+	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, false);
 }
 EXPORT_SYMBOL(vmf_insert_mixed);
 
@@ -1779,7 +1785,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
 		unsigned long addr, pfn_t pfn)
 {
-	return __vm_insert_mixed(vma, addr, pfn, true);
+	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, true);
 }
 EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
 
-- 
2.21.0



* [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling
  2019-12-06  8:24 [PATCH v3 0/2] mm, drm/ttm: Fix pte insertion with customized protection Thomas Hellström (VMware)
  2019-12-06  8:24 ` [PATCH v3 1/2] mm: Add a vmf_insert_mixed_prot() function Thomas Hellström (VMware)
@ 2019-12-06  8:24 ` Thomas Hellström (VMware)
  2019-12-06 10:30   ` Michal Hocko
  1 sibling, 1 reply; 6+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-06  8:24 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Andrew Morton, Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

From: Thomas Hellstrom <thellstrom@vmware.com>

TTM graphics buffer objects may, transparently to user-space, move
between IO and system memory. When that happens, all PTEs pointing to the
old location are zapped before the move and then faulted in again if
needed. At that point, the caching mode and encryption bits of the page
protection may change and differ from those of
struct vm_area_struct::vm_page_prot.

We were using an ugly hack to set the page protection correctly.
Fix that by instead exporting and using vmf_insert_mixed_prot(), or using
vmf_insert_pfn_prot().
Also get the default page protection from
struct vm_area_struct::vm_page_prot rather than from vm_get_page_prot().
This way we pick up modifications done by the vm system for drivers that
want write-notification.
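
As a rough before/after sketch of that last point (the actual change is in
ttm_bo_vm_fault() in the diff below):

        /* Before: protection recomputed from the vma flags, ignoring any
         * later adjustments the core vm made for this particular mapping.
         */
        prot = vm_get_page_prot(vma->vm_flags);

        /* After: whatever the core vm actually set up for this vma, e.g.
         * with the write bit cleared when the driver asked for
         * write-notification.
         */
        prot = vma->vm_page_prot;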

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 28 +++++++++++++++++++++-------
 mm/memory.c                     |  1 +
 2 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index e6495ca2630b..35d0a0e7aacc 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -173,7 +173,6 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 				    pgoff_t num_prefault)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct vm_area_struct cvma = *vma;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	struct ttm_bo_device *bdev = bo->bdev;
 	unsigned long page_offset;
@@ -244,7 +243,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		goto out_io_unlock;
 	}
 
-	cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, prot);
+	prot = ttm_io_prot(bo->mem.placement, prot);
 	if (!bo->mem.bus.is_iomem) {
 		struct ttm_operation_ctx ctx = {
 			.interruptible = false,
@@ -260,7 +259,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		}
 	} else {
 		/* Iomem should not be marked encrypted */
-		cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+		prot = pgprot_decrypted(prot);
 	}
 
 	/*
@@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 			pfn = page_to_pfn(page);
 		}
 
+		/*
+		 * Note that the value of @prot at this point may differ from
+		 * the value of @vma->vm_page_prot in the caching- and
+		 * encryption bits. This is because the exact location of the
+		 * data may not be known at mmap() time and may also change
+		 * at arbitrary times while the data is mmap'ed.
+		 * This is ok as long as @vma->vm_page_prot is not used by
+		 * the core vm to set caching- and encryption bits.
+		 * This is ensured by core vm using pte_modify() to modify
+		 * page table entry protection bits (that function preserves
+		 * old caching- and encryption bits), and the @fault
+		 * callback being the only function that creates new
+		 * page table entries.
+		 */
 		if (vma->vm_flags & VM_MIXEDMAP)
-			ret = vmf_insert_mixed(&cvma, address,
-					__pfn_to_pfn_t(pfn, PFN_DEV));
+			ret = vmf_insert_mixed_prot(vma, address,
+						    __pfn_to_pfn_t(pfn, PFN_DEV),
+						    prot);
 		else
-			ret = vmf_insert_pfn(&cvma, address, pfn);
+			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
 
 		/* Never error on prefaulted PTEs */
 		if (unlikely((ret & VM_FAULT_ERROR))) {
@@ -319,7 +333,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	if (ret)
 		return ret;
 
-	prot = vm_get_page_prot(vma->vm_flags);
+	prot = vma->vm_page_prot;
 	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
 		return ret;
diff --git a/mm/memory.c b/mm/memory.c
index b9e7f1d56b1c..4c26c27afb0a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1769,6 +1769,7 @@ vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
 {
 	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
 }
+EXPORT_SYMBOL(vmf_insert_mixed_prot);
 
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
-- 
2.21.0



* Re: [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling
  2019-12-06  8:24 ` [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling Thomas Hellström (VMware)
@ 2019-12-06 10:30   ` Michal Hocko
  2019-12-06 14:16     ` Thomas Hellstrom
  0 siblings, 1 reply; 6+ messages in thread
From: Michal Hocko @ 2019-12-06 10:30 UTC (permalink / raw)
  To: Thomas Hellström (VMware)
  Cc: linux-mm, linux-kernel, dri-devel, pv-drivers,
	linux-graphics-maintainer, Thomas Hellstrom, Andrew Morton,
	Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
[...]
> @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  			pfn = page_to_pfn(page);
>  		}
>  
> +		/*
> +		 * Note that the value of @prot at this point may differ from
> +		 * the value of @vma->vm_page_prot in the caching- and
> +		 * encryption bits. This is because the exact location of the
> +		 * data may not be known at mmap() time and may also change
> +		 * at arbitrary times while the data is mmap'ed.
> +		 * This is ok as long as @vma->vm_page_prot is not used by
> +		 * the core vm to set caching- and encryption bits.
> +		 * This is ensured by core vm using pte_modify() to modify
> +		 * page table entry protection bits (that function preserves
> +		 * old caching- and encryption bits), and the @fault
> +		 * callback being the only function that creates new
> +		 * page table entries.
> +		 */

While this is a very valuable piece of information, I believe we need to
document this in the generic code where everybody will find it.
vmf_insert_mixed_prot() sounds like a good place to me, being explicit
about VM_MIXEDMAP. Also, a reference from vm_page_prot to this function
would be really helpful.

Thanks!

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling
  2019-12-06 10:30   ` Michal Hocko
@ 2019-12-06 14:16     ` Thomas Hellstrom
  2019-12-06 14:24       ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: Thomas Hellstrom @ 2019-12-06 14:16 UTC (permalink / raw)
  To: mhocko, thomas_os
  Cc: linux-kernel, kirill.shutemov, willy, linux-mm, christian.koenig,
	akpm, Pv-drivers, rcampbell, dri-devel, jglisse,
	Linux-graphics-maintainer

Hi Michal,

On Fri, 2019-12-06 at 11:30 +0100, Michal Hocko wrote:
> On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
> [...]
> > @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct
> > vm_fault *vmf,
> >  			pfn = page_to_pfn(page);
> >  		}
> >  
> > +		/*
> > +		 * Note that the value of @prot at this point may
> > differ from
> > +		 * the value of @vma->vm_page_prot in the caching- and
> > +		 * encryption bits. This is because the exact location
> > of the
> > +		 * data may not be known at mmap() time and may also
> > change
> > +		 * at arbitrary times while the data is mmap'ed.
> > +		 * This is ok as long as @vma->vm_page_prot is not used
> > by
> > +		 * the core vm to set caching- and encryption bits.
> > +		 * This is ensured by core vm using pte_modify() to
> > modify
> > +		 * page table entry protection bits (that function
> > preserves
> > +		 * old caching- and encryption bits), and the @fault
> > +		 * callback being the only function that creates new
> > +		 * page table entries.
> > +		 */
> 
> While this is a very valuable piece of information, I believe we need to
> document this in the generic code where everybody will find it.
> vmf_insert_mixed_prot() sounds like a good place to me, being explicit
> about VM_MIXEDMAP. Also, a reference from vm_page_prot to this function
> would be really helpful.
> 
> Thanks!
> 

Just to make sure I understand correctly. You'd prefer this (or
similar) text to be present at the vmf_insert_mixed_prot() and
vmf_insert_pfn_prot() definitions for MIXEDMAP and PFNMAP respectively,
and a pointer from vm_page_prot to that text. Is that correct?

Thanks,
Thomas




* Re: [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling
  2019-12-06 14:16     ` Thomas Hellstrom
@ 2019-12-06 14:24       ` Michal Hocko
  0 siblings, 0 replies; 6+ messages in thread
From: Michal Hocko @ 2019-12-06 14:24 UTC (permalink / raw)
  To: Thomas Hellstrom
  Cc: thomas_os, linux-kernel, kirill.shutemov, willy, linux-mm,
	christian.koenig, akpm, Pv-drivers, rcampbell, dri-devel,
	jglisse, Linux-graphics-maintainer

On Fri 06-12-19 14:16:10, Thomas Hellstrom wrote:
> Hi Michal,
> 
> On Fri, 2019-12-06 at 11:30 +0100, Michal Hocko wrote:
> > On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
> > [...]
> > > @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct
> > > vm_fault *vmf,
> > >  			pfn = page_to_pfn(page);
> > >  		}
> > >  
> > > +		/*
> > > +		 * Note that the value of @prot at this point may
> > > differ from
> > > +		 * the value of @vma->vm_page_prot in the caching- and
> > > +		 * encryption bits. This is because the exact location
> > > of the
> > > +		 * data may not be known at mmap() time and may also
> > > change
> > > +		 * at arbitrary times while the data is mmap'ed.
> > > +		 * This is ok as long as @vma->vm_page_prot is not used
> > > by
> > > +		 * the core vm to set caching- and encryption bits.
> > > +		 * This is ensured by core vm using pte_modify() to
> > > modify
> > > +		 * page table entry protection bits (that function
> > > preserves
> > > +		 * old caching- and encryption bits), and the @fault
> > > +		 * callback being the only function that creates new
> > > +		 * page table entries.
> > > +		 */
> > 
> > While this is a very valuable piece of information, I believe we need to
> > document this in the generic code where everybody will find it.
> > vmf_insert_mixed_prot() sounds like a good place to me, being explicit
> > about VM_MIXEDMAP. Also, a reference from vm_page_prot to this function
> > would be really helpful.
> > 
> > Thanks!
> > 
> 
> Just to make sure I understand correctly. You'd prefer this (or
> similar) text to be present at the vmf_insert_mixed_prot() and
> vmf_insert_pfn_prot() definitions for MIXEDMAP and PFNMAP respectively,
> and a pointer from vm_page_prot to that text. Is that correct?

Yes. You can keep whatever is specific to TTM here, but the rest should
be somewhere visible to users of the interface, and a note at
vm_page_prot should help anybody touching the generic/core code not to
break those expectations.

Thanks!
-- 
Michal Hocko
SUSE Labs

