* [PATCH v4 0/2] mm, drm/ttm: Fix pte insertion with customized protection
@ 2019-12-12  8:47 ` Thomas Hellström (VMware)
  0 siblings, 0 replies; 14+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-12  8:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Andrew Morton, Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

From: Thomas Hellstrom <thellstrom@vmware.com>

The drm/ttm module is using a modified on-stack copy of the
struct vm_area_struct to be able to set a page protection with customized
caching. Fix that by adding a vmf_insert_mixed_prot() function similar
to the existing vmf_insert_pfn_prot() for use with drm/ttm.

I'd like to merge this through a drm tree.

Changes since v1:
*) Formatting fixes in patch 1
*) Updated commit message of patch 2.
Changes since v2:
*) Moved vmf_insert_mixed_prot() export to patch 2 (Michal Hocko)
*) Documented under which conditions it's safe to use a page protection
   different from struct vm_area_struct::vm_page_prot. (Michal Hocko)
Changes since v3:
*) More documentation regarding under which conditions it's safe to use a
   page protection different from struct vm_area_struct::vm_page_prot. This
   time also in core vm. (Michal Hocko)

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>

-- 
2.21.0
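
For reference, a condensed sketch of the on-stack hack this series
removes, modeled on the lines deleted from ttm_bo_vm_fault_reserved()
in patch 2. The function name, the pfn argument and the simplified
prot handling are illustrative placeholders; error handling is elided.

#include <linux/mm.h>
#include <linux/pfn_t.h>

/* Sketch only: the pre-series fault path smuggled a custom pgprot into
 * the core vm by handing it a modified on-stack copy of the VMA.
 */
static vm_fault_t old_ttm_fault_sketch(struct vm_fault *vmf,
				       unsigned long pfn, pgprot_t prot)
{
	struct vm_area_struct *vma = vmf->vma;
	struct vm_area_struct cvma = *vma;	/* on-stack copy of the VMA */

	/* Override the caching bits behind the core vm's back. */
	cvma.vm_page_prot = prot;

	if (vma->vm_flags & VM_MIXEDMAP)
		return vmf_insert_mixed(&cvma, vmf->address,
					__pfn_to_pfn_t(pfn, PFN_DEV));
	return vmf_insert_pfn(&cvma, vmf->address, pfn);
}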




* [PATCH v4 1/2] mm: Add a vmf_insert_mixed_prot() function
  2019-12-12  8:47 ` Thomas Hellström (VMware)
@ 2019-12-12  8:47   ` Thomas Hellström (VMware)
  -1 siblings, 0 replies; 14+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-12  8:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Andrew Morton, Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

From: Thomas Hellstrom <thellstrom@vmware.com>

The TTM module today uses a hack to be able to set a different page
protection than struct vm_area_struct::vm_page_prot. To be able to do
this properly, add the needed vm functionality as vmf_insert_mixed_prot().

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Christian König <christian.koenig@amd.com>
---
 include/linux/mm.h       |  2 ++
 include/linux/mm_types.h |  7 ++++++-
 mm/memory.c              | 43 ++++++++++++++++++++++++++++++++++++----
 3 files changed, 47 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cc292273e6ba..29575d3c1e47 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2548,6 +2548,8 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn, pgprot_t pgprot);
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 			pfn_t pfn);
+vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+			pfn_t pfn, pgprot_t pgprot);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
 		unsigned long addr, pfn_t pfn);
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2222fa795284..ac96afdbb4bc 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -307,7 +307,12 @@ struct vm_area_struct {
 	/* Second cache line starts here. */
 
 	struct mm_struct *vm_mm;	/* The address space we belong to. */
-	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
+
+	/*
+	 * Access permissions of this VMA.
+	 * See vmf_insert_mixed() for discussion.
+	 */
+	pgprot_t vm_page_prot;
 	unsigned long vm_flags;		/* Flags, see mm.h. */
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f2..269a8a871e83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1646,6 +1646,9 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
  * vmf_insert_pfn_prot should only be used if using multiple VMAs is
  * impractical.
  *
+ * See vmf_insert_mixed_prot() for a discussion of the implications of using
+ * a value of @pgprot different from that of @vma->vm_page_prot.
+ *
  * Context: Process context.  May allocate using %GFP_KERNEL.
  * Return: vm_fault_t value.
  */
@@ -1719,9 +1722,9 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
 }
 
 static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
-		unsigned long addr, pfn_t pfn, bool mkwrite)
+		unsigned long addr, pfn_t pfn, pgprot_t pgprot,
+		bool mkwrite)
 {
-	pgprot_t pgprot = vma->vm_page_prot;
 	int err;
 
 	BUG_ON(!vm_mixed_ok(vma, pfn));
@@ -1764,10 +1767,42 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
 	return VM_FAULT_NOPAGE;
 }
 
+/**
+ * vmf_insert_mixed_prot - insert single pfn into user vma with specified pgprot
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @pfn: source kernel pfn
+ * @pgprot: pgprot flags for the inserted page
+ *
+ * This is exactly like vmf_insert_mixed(), except that it allows drivers
+ * to override pgprot on a per-page basis.
+ *
+ * Typically this function should be used by drivers to set caching- and
+ * encryption bits different than those of @vma->vm_page_prot, because
+ * the caching- or encryption mode may not be known at mmap() time.
+ * This is ok as long as @vma->vm_page_prot is not used by the core vm
+ * to set caching and encryption bits for those vmas (except for COW pages).
+ * This is ensured by core vm only modifying these page table entries using
+ * functions that don't touch caching- or encryption bits, using pte_modify()
+ * if needed. (See for example mprotect()).
+ * Also when new page-table entries are created, this is only done using the
+ * fault() callback, and never using the value of vma->vm_page_prot,
+ * except for page-table entries that point to anonymous pages as the result
+ * of COW.
+ *
+ * Context: Process context.  May allocate using %GFP_KERNEL.
+ * Return: vm_fault_t value.
+ */
+vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+				 pfn_t pfn, pgprot_t pgprot)
+{
+	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
+}
+
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
 {
-	return __vm_insert_mixed(vma, addr, pfn, false);
+	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, false);
 }
 EXPORT_SYMBOL(vmf_insert_mixed);
 
@@ -1779,7 +1814,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
 		unsigned long addr, pfn_t pfn)
 {
-	return __vm_insert_mixed(vma, addr, pfn, true);
+	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, true);
 }
 EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
 
-- 
2.21.0
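
To illustrate the new interface, a minimal sketch of a driver fault
handler built on vmf_insert_mixed_prot(). my_dev_fault() and the
pgoff-based pfn lookup are hypothetical, and write-combining is just
one example of a caching mode that may differ from vma->vm_page_prot.

#include <linux/mm.h>
#include <linux/pfn_t.h>

static vm_fault_t my_dev_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	/* Placeholder lookup; a real driver derives the pfn from its
	 * buffer object.
	 */
	unsigned long pfn = vmf->pgoff;
	/* Keep the access permissions, override only the caching mode,
	 * as permitted by the kernel-doc above.
	 */
	pgprot_t prot = pgprot_writecombine(vma->vm_page_prot);

	return vmf_insert_mixed_prot(vma, vmf->address,
				     __pfn_to_pfn_t(pfn, PFN_DEV), prot);
}

The VMA would be set up with VM_MIXEDMAP at mmap() time, since
vmf_insert_mixed_prot() inherits vmf_insert_mixed()'s requirements.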




* [PATCH v4 2/2] mm, drm/ttm: Fix vm page protection handling
  2019-12-12  8:47 ` Thomas Hellström (VMware)
@ 2019-12-12  8:47   ` Thomas Hellström (VMware)
  -1 siblings, 0 replies; 14+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-12  8:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Andrew Morton, Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

From: Thomas Hellstrom <thellstrom@vmware.com>

TTM graphics buffer objects may, transparently to user-space, move
between IO and system memory. When that happens, all PTEs pointing to the
old location are zapped before the move and then faulted in again if
needed. At that point, the caching- and encryption bits of the page
protection may change and differ from those of
struct vm_area_struct::vm_page_prot.

We were using an ugly hack to set the page protection correctly.
Fix that by instead exporting and using vmf_insert_mixed_prot(), or
using vmf_insert_pfn_prot().
Also get the default page protection from
struct vm_area_struct::vm_page_prot rather than from vm_get_page_prot().
This way we catch modifications done by the vm system for drivers that
want write-notification.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 22 +++++++++++++++-------
 mm/memory.c                     |  1 +
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 3e8c3de91ae4..78bfab81cf04 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -173,7 +173,6 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 				    pgoff_t num_prefault)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct vm_area_struct cvma = *vma;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	struct ttm_bo_device *bdev = bo->bdev;
 	unsigned long page_offset;
@@ -244,7 +243,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		goto out_io_unlock;
 	}
 
-	cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, prot);
+	prot = ttm_io_prot(bo->mem.placement, prot);
 	if (!bo->mem.bus.is_iomem) {
 		struct ttm_operation_ctx ctx = {
 			.interruptible = false,
@@ -260,7 +259,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		}
 	} else {
 		/* Iomem should not be marked encrypted */
-		cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+		prot = pgprot_decrypted(prot);
 	}
 
 	/*
@@ -283,11 +282,20 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 			pfn = page_to_pfn(page);
 		}
 
+		/*
+		 * Note that the value of @prot at this point may differ from
+		 * the value of @vma->vm_page_prot in the caching- and
+		 * encryption bits. This is because the exact location of the
+		 * data may not be known at mmap() time and may also change
+		 * at arbitrary times while the data is mmap'ed.
+		 * See vmf_insert_mixed_prot() for a discussion.
+		 */
 		if (vma->vm_flags & VM_MIXEDMAP)
-			ret = vmf_insert_mixed(&cvma, address,
-					__pfn_to_pfn_t(pfn, PFN_DEV));
+			ret = vmf_insert_mixed_prot(vma, address,
+						    __pfn_to_pfn_t(pfn, PFN_DEV),
+						    prot);
 		else
-			ret = vmf_insert_pfn(&cvma, address, pfn);
+			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
 
 		/* Never error on prefaulted PTEs */
 		if (unlikely((ret & VM_FAULT_ERROR))) {
@@ -319,7 +327,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	if (ret)
 		return ret;
 
-	prot = vm_get_page_prot(vma->vm_flags);
+	prot = vma->vm_page_prot;
 	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
 		return ret;
diff --git a/mm/memory.c b/mm/memory.c
index 269a8a871e83..0f262acd1018 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1798,6 +1798,7 @@ vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
 {
 	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
 }
+EXPORT_SYMBOL(vmf_insert_mixed_prot);
 
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
-- 
2.21.0
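
A short sketch of the last point, the change in how the default
protection is obtained (the write-notification example assumes a
driver that uses the core vm's page_mkwrite() mechanism):

	/* Before: recomputed from the flags, so modifications the vm
	 * system made to vma->vm_page_prot, e.g. write-protecting the
	 * PTEs of a write-notifying mapping, were silently dropped:
	 */
	prot = vm_get_page_prot(vma->vm_flags);

	/* After: the actual VMA protection, including such changes: */
	prot = vma->vm_page_prot;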




* Re: [PATCH v4 1/2] mm: Add a vmf_insert_mixed_prot() function
  2019-12-12  8:47   ` Thomas Hellström (VMware)
@ 2019-12-12  8:51     ` Thomas Hellstrom
  -1 siblings, 0 replies; 14+ messages in thread
From: Thomas Hellstrom @ 2019-12-12  8:51 UTC (permalink / raw)
  To: Thomas Hellström (VMware), linux-mm, linux-kernel, dri-devel
  Cc: Pv-drivers, Linux-graphics-maintainer, Andrew Morton,
	Michal Hocko, Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

On 12/12/19 9:48 AM, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
>
> The TTM module today uses a hack to be able to set a different page
> protection than struct vm_area_struct::vm_page_prot. To be able to do
> this properly, add the needed vm functionality as vmf_insert_mixed_prot().
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> Acked-by: Christian König <christian.koenig@amd.com>
> ---
>  include/linux/mm.h       |  2 ++
>  include/linux/mm_types.h |  7 ++++++-
>  mm/memory.c              | 43 ++++++++++++++++++++++++++++++++++++----
>  3 files changed, 47 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index cc292273e6ba..29575d3c1e47 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2548,6 +2548,8 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
>  			unsigned long pfn, pgprot_t pgprot);
>  vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
>  			pfn_t pfn);
> +vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
> +			pfn_t pfn, pgprot_t pgprot);
>  vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
>  		unsigned long addr, pfn_t pfn);
>  int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2222fa795284..ac96afdbb4bc 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -307,7 +307,12 @@ struct vm_area_struct {
>  	/* Second cache line starts here. */
>  
>  	struct mm_struct *vm_mm;	/* The address space we belong to. */
> -	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
> +
> +	/*
> +	 * Access permissions of this VMA.
> +	 * See vmf_insert_mixed() for discussion.

Typo. It will of course be vmf_insert_mixed_prot() in the final version.





* Ack to merge through DRM tree? WAS [PATCH v4 0/2] mm, drm/ttm: Fix pte insertion with customized protection
  2019-12-12  8:47 ` Thomas Hellström (VMware)
@ 2019-12-20  8:06   ` Thomas Hellström (VMware)
  -1 siblings, 0 replies; 14+ messages in thread
From: Thomas Hellström (VMware) @ 2019-12-20  8:06 UTC (permalink / raw)
  To: linux-mm, linux-kernel, dri-devel, Andrew Morton, Michal Hocko
  Cc: pv-drivers, linux-graphics-maintainer, Thomas Hellstrom,
	Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

Andrew,

On 12/12/19 9:47 AM, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
>
> The drm/ttm module is using a modified on-stack copy of the
> struct vm_area_struct to be able to set a page protection with customized
> caching. Fix that by adding a vmf_insert_mixed_prot() function similar
> to the existing vmf_insert_pfn_prot() for use with drm/ttm.
>
> I'd like to merge this through a drm tree.
>
> Changes since v1:
> *) Formatting fixes in patch 1
> *) Updated commit message of patch 2.
> Changes since v2:
> *) Moved vmf_insert_mixed_prot() export to patch 2 (Michal Hocko)
> *) Documented under which conditions it's safe to use a page protection
>     different from struct vm_area_struct::vm_page_prot. (Michal Hocko)
> Changes since v3:
> *) More documentation regarding under which conditions it's safe to use a
>     page protection different from struct vm_area_struct::vm_page_prot. This
>     time also in core vm. (Michal Hocko)
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: "Christian König" <christian.koenig@amd.com>
>
Seems all concerns with this series have been addressed. Could I have an 
ack to merge this through a DRM tree?

Thanks,

Thomas






* Re: [PATCH v4 1/2] mm: Add a vmf_insert_mixed_prot() function
  2019-12-12  8:47   ` Thomas Hellström (VMware)
@ 2019-12-20  8:23     ` Michal Hocko
  -1 siblings, 0 replies; 14+ messages in thread
From: Michal Hocko @ 2019-12-20  8:23 UTC (permalink / raw)
  To: Thomas Hellström (VMware)
  Cc: linux-mm, linux-kernel, dri-devel, pv-drivers,
	linux-graphics-maintainer, Thomas Hellstrom, Andrew Morton,
	Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

On Thu 12-12-19 09:47:40, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
> 
> The TTM module today uses a hack to be able to set a different page
> protection than struct vm_area_struct::vm_page_prot. To be able to do
> this properly, add the needed vm functionality as vmf_insert_mixed_prot().
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> Acked-by: Christian König <christian.koenig@amd.com>

I cannot say I am happy about this, because it adds a discrepancy and
that is always tricky, but I do agree that a formalized discrepancy is
better than ad-hoc hacks, so

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks for extending the documentation.

> ---
>  include/linux/mm.h       |  2 ++
>  include/linux/mm_types.h |  7 ++++++-
>  mm/memory.c              | 43 ++++++++++++++++++++++++++++++++++++----
>  3 files changed, 47 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index cc292273e6ba..29575d3c1e47 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2548,6 +2548,8 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
>  			unsigned long pfn, pgprot_t pgprot);
>  vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
>  			pfn_t pfn);
> +vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
> +			pfn_t pfn, pgprot_t pgprot);
>  vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
>  		unsigned long addr, pfn_t pfn);
>  int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2222fa795284..ac96afdbb4bc 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -307,7 +307,12 @@ struct vm_area_struct {
>  	/* Second cache line starts here. */
>  
>  	struct mm_struct *vm_mm;	/* The address space we belong to. */
> -	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
> +
> +	/*
> +	 * Access permissions of this VMA.
> +	 * See vmf_insert_mixed() for discussion.
> +	 */
> +	pgprot_t vm_page_prot;
>  	unsigned long vm_flags;		/* Flags, see mm.h. */
>  
>  	/*
> diff --git a/mm/memory.c b/mm/memory.c
> index b1ca51a079f2..269a8a871e83 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1646,6 +1646,9 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>   * vmf_insert_pfn_prot should only be used if using multiple VMAs is
>   * impractical.
>   *
> + * See vmf_insert_mixed_prot() for a discussion of the implications of using
> + * a value of @pgprot different from that of @vma->vm_page_prot.
> + *
>   * Context: Process context.  May allocate using %GFP_KERNEL.
>   * Return: vm_fault_t value.
>   */
> @@ -1719,9 +1722,9 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
>  }
>  
>  static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
> -		unsigned long addr, pfn_t pfn, bool mkwrite)
> +		unsigned long addr, pfn_t pfn, pgprot_t pgprot,
> +		bool mkwrite)
>  {
> -	pgprot_t pgprot = vma->vm_page_prot;
>  	int err;
>  
>  	BUG_ON(!vm_mixed_ok(vma, pfn));
> @@ -1764,10 +1767,42 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
>  	return VM_FAULT_NOPAGE;
>  }
>  
> +/**
> + * vmf_insert_mixed_prot - insert single pfn into user vma with specified pgprot
> + * @vma: user vma to map to
> + * @addr: target user address of this page
> + * @pfn: source kernel pfn
> + * @pgprot: pgprot flags for the inserted page
> + *
> + * This is exactly like vmf_insert_mixed(), except that it allows drivers
> + * to override pgprot on a per-page basis.
> + *
> + * Typically this function should be used by drivers to set caching- and
> + * encryption bits different than those of @vma->vm_page_prot, because
> + * the caching- or encryption mode may not be known at mmap() time.
> + * This is ok as long as @vma->vm_page_prot is not used by the core vm
> + * to set caching and encryption bits for those vmas (except for COW pages).
> + * This is ensured by core vm only modifying these page table entries using
> + * functions that don't touch caching- or encryption bits, using pte_modify()
> + * if needed. (See for example mprotect()).
> + * Also when new page-table entries are created, this is only done using the
> + * fault() callback, and never using the value of vma->vm_page_prot,
> + * except for page-table entries that point to anonymous pages as the result
> + * of COW.
> + *
> + * Context: Process context.  May allocate using %GFP_KERNEL.
> + * Return: vm_fault_t value.
> + */
> +vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
> +				 pfn_t pfn, pgprot_t pgprot)
> +{
> +	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
> +}
> +
>  vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
>  		pfn_t pfn)
>  {
> -	return __vm_insert_mixed(vma, addr, pfn, false);
> +	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, false);
>  }
>  EXPORT_SYMBOL(vmf_insert_mixed);
>  
> @@ -1779,7 +1814,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
>  vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
>  		unsigned long addr, pfn_t pfn)
>  {
> -	return __vm_insert_mixed(vma, addr, pfn, true);
> +	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, true);
>  }
>  EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
>  
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs



* Re: Ack to merge through DRM tree? WAS [PATCH v4 0/2] mm, drm/ttm: Fix pte insertion with customized protection
  2019-12-20  8:06   ` Thomas Hellström (VMware)
@ 2019-12-23 22:34     ` Andrew Morton
  -1 siblings, 0 replies; 14+ messages in thread
From: Andrew Morton @ 2019-12-23 22:34 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: linux-mm, linux-kernel, dri-devel, Michal Hocko, pv-drivers,
	linux-graphics-maintainer, Thomas Hellstrom,
	Matthew Wilcox (Oracle),
	Kirill A. Shutemov, Ralph Campbell, Jérôme Glisse,
	Christian König

On Fri, 20 Dec 2019 09:06:08 +0100 Thomas Hellström (VMware) <thomas_os@shipmail.org> wrote:

> Andrew,
> 
> On 12/12/19 9:47 AM, Thomas Hellström (VMware) wrote:
> > From: Thomas Hellstrom <thellstrom@vmware.com>
> >
> > The drm/ttm module is using a modified on-stack copy of the
> > struct vm_area_struct to be able to set a page protection with customized
> > caching. Fix that by adding a vmf_insert_mixed_prot() function similar
> > to the existing vmf_insert_pfn_prot() for use with drm/ttm.
> >
> > I'd like to merge this through a drm tree.
> >
> > Changes since v1:
> > *) Formatting fixes in patch 1
> > *) Updated commit message of patch 2.
> > Changes since v2:
> > *) Moved vmf_insert_mixed_prot() export to patch 2 (Michal Hocko)
> > *) Documented under which conditions it's safe to use a page protection
> >     different from struct vm_area_struct::vm_page_prot. (Michal Hocko)
> > Changes since v3:
> > *) More documentation regarding under which conditions it's safe to use a
> >     page protection different from struct vm_area_struct::vm_page_prot. This
> >     time also in core vm. (Michal Hocko)
> >
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Michal Hocko <mhocko@suse.com>
> > Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> > Cc: Ralph Campbell <rcampbell@nvidia.com>
> > Cc: "Jérôme Glisse" <jglisse@redhat.com>
> > Cc: "Christian König" <christian.koenig@amd.com>
> >
> Seems all concerns with this series have been addressed. Could I have an 
> ack to merge this through a DRM tree?
> 

Yes, please do that.

Acked-by: Andrew Morton <akpm@linux-foundation.org>

for both.


