* [PATCH 0/3] mmu notifier contextual informations
@ 2018-12-03 20:18 jglisse
  2018-12-03 20:18 ` [PATCH 1/3] mm/mmu_notifier: use structure for invalidate_range_start/end callback jglisse
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: jglisse @ 2018-12-03 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, linux-kernel, Jérôme Glisse,
	Matthew Wilcox, Ross Zwisler, Jan Kara, Dan Williams,
	Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

From: Jérôme Glisse <jglisse@redhat.com>

This patchset adds contextual information (why an invalidation is
happening) to the mmu notifier callbacks. This is necessary for users
of mmu notifiers that wish to maintain their own data structures
without having to add new fields to struct vm_area_struct (vma).

For instance, a device can have its own page table that mirrors the
process address space. When a vma is unmapped (munmap() syscall) the
device driver can free the device page table for the range.

Today we do not have any information on why an mmu notifier callback
is happening, so device drivers have to assume that it is always a
munmap(). This is inefficient as it means they need to re-allocate the
device page table on the next page fault and rebuild the whole device
driver data structure for the range.

Other use cases beside munmap() also exist. For instance, it is
pointless for a device driver to invalidate the device page table when
the invalidation is only for soft-dirty tracking, and a device driver
can likewise optimize away an mprotect() that only changes the page
table access permissions for the range.

This patchset enables all these optimizations for device drivers. I
do not include any of them in this series, but other patchsets I am
posting will leverage this.


From a code point of view the patchset is pretty simple: the first
two patches consolidate all mmu notifier arguments into a struct so
that it is easier to add/change arguments. The last patch adds the
contextual information (munmap, protection, soft dirty, clear, ...).
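
For readers who do not have patch 3 in front of them, here is a rough
sketch of the direction. This is illustrative only: the mm, start, end
and blockable fields come from the first two patches, while the event
field, the event names and the mydev_*() callback below are placeholders
for the example, not necessarily what the final patch uses.

enum mmu_notifier_event {
	MMU_NOTIFY_UNMAP,	/* range is going away (munmap, exit, ...) */
	MMU_NOTIFY_PROTECTION,	/* only access permissions change */
	MMU_NOTIFY_SOFT_DIRTY,	/* soft-dirty bits cleared, no unmap */
};

struct mmu_notifier_range {
	struct mm_struct *mm;
	unsigned long start;
	unsigned long end;
	bool blockable;
	enum mmu_notifier_event event;	/* the new contextual information */
};

/* A driver mirroring the address space can then skip needless work: */
static int mydev_invalidate_range_start(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
{
	if (range->event == MMU_NOTIFY_SOFT_DIRTY)
		return 0;	/* device mappings stay valid */
	/* otherwise free/invalidate the device page table for the range */
	return 0;
}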

Cheers,
Jérôme

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: kvm@vger.kernel.org
Cc: linux-rdma@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org

Jérôme Glisse (3):
  mm/mmu_notifier: use structure for invalidate_range_start/end callback
  mm/mmu_notifier: use structure for invalidate_range_start/end calls
  mm/mmu_notifier: contextual information for event triggering
    invalidation

 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c  |  43 ++++-----
 drivers/gpu/drm/i915/i915_gem_userptr.c |  14 ++-
 drivers/gpu/drm/radeon/radeon_mn.c      |  16 ++--
 drivers/infiniband/core/umem_odp.c      |  20 ++---
 drivers/infiniband/hw/hfi1/mmu_rb.c     |  13 ++-
 drivers/misc/mic/scif/scif_dma.c        |  11 +--
 drivers/misc/sgi-gru/grutlbpurge.c      |  14 ++-
 drivers/xen/gntdev.c                    |  12 +--
 fs/dax.c                                |  11 ++-
 fs/proc/task_mmu.c                      |  10 ++-
 include/linux/mm.h                      |   4 +-
 include/linux/mmu_notifier.h            | 106 +++++++++++++++-------
 kernel/events/uprobes.c                 |  13 +--
 mm/hmm.c                                |  23 ++---
 mm/huge_memory.c                        |  58 ++++++------
 mm/hugetlb.c                            |  63 +++++++------
 mm/khugepaged.c                         |  13 +--
 mm/ksm.c                                |  26 +++---
 mm/madvise.c                            |  22 ++---
 mm/memory.c                             | 112 ++++++++++++++----------
 mm/migrate.c                            |  30 ++++---
 mm/mmu_notifier.c                       |  22 +++--
 mm/mprotect.c                           |  17 ++--
 mm/mremap.c                             |  14 +--
 mm/oom_kill.c                           |  20 +++--
 mm/rmap.c                               |  34 ++++---
 virt/kvm/kvm_main.c                     |  14 ++-
 27 files changed, 421 insertions(+), 334 deletions(-)

-- 
2.17.2



* [PATCH 1/3] mm/mmu_notifier: use structure for invalidate_range_start/end callback
  2018-12-03 20:18 [PATCH 0/3] mmu notifier contextual informations jglisse
@ 2018-12-03 20:18 ` jglisse
  2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: jglisse @ 2018-12-03 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, linux-kernel, Jérôme Glisse,
	Matthew Wilcox, Ross Zwisler, Jan Kara, Dan Williams,
	Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

From: Jérôme Glisse <jglisse@redhat.com>

To avoid having to change many callback definitions every time we
want to add a parameter, use a structure to group all parameters of
the mmu_notifier invalidate_range_start/end callbacks. No functional
changes with this patch.
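
For illustration, this is the callback shape after the conversion; the
mydrv_*() helpers are made up for the example, the real conversions are
in the hunks below.

static int mydrv_invalidate_range_start(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
{
	/* mm, start, end and blockable are now read from *range. */
	if (!range->blockable && !mydrv_trylock(mn))
		return -EAGAIN;
	mydrv_invalidate(mn, range->mm, range->start, range->end);
	mydrv_unlock(mn);
	return 0;
}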

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: kvm@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linux-rdma@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c  | 43 +++++++++++--------------
 drivers/gpu/drm/i915/i915_gem_userptr.c | 14 ++++----
 drivers/gpu/drm/radeon/radeon_mn.c      | 16 ++++-----
 drivers/infiniband/core/umem_odp.c      | 20 +++++-------
 drivers/infiniband/hw/hfi1/mmu_rb.c     | 13 +++-----
 drivers/misc/mic/scif/scif_dma.c        | 11 ++-----
 drivers/misc/sgi-gru/grutlbpurge.c      | 14 ++++----
 drivers/xen/gntdev.c                    | 12 +++----
 include/linux/mmu_notifier.h            | 14 +++++---
 mm/hmm.c                                | 23 ++++++-------
 mm/mmu_notifier.c                       | 21 ++++++++++--
 virt/kvm/kvm_main.c                     | 14 +++-----
 12 files changed, 102 insertions(+), 113 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index e55508b39496..5bc7e59a05a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -246,36 +246,34 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node,
  * potentially dirty.
  */
 static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn,
-						 struct mm_struct *mm,
-						 unsigned long start,
-						 unsigned long end,
-						 bool blockable)
+			const struct mmu_notifier_range *range)
 {
 	struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn);
 	struct interval_tree_node *it;
+	unsigned long end;
 
 	/* notification is exclusive, but interval is inclusive */
-	end -= 1;
+	end = range->end - 1;
 
 	/* TODO we should be able to split locking for interval tree and
 	 * amdgpu_mn_invalidate_node
 	 */
-	if (amdgpu_mn_read_lock(amn, blockable))
+	if (amdgpu_mn_read_lock(amn, range->blockable))
 		return -EAGAIN;
 
-	it = interval_tree_iter_first(&amn->objects, start, end);
+	it = interval_tree_iter_first(&amn->objects, range->start, end);
 	while (it) {
 		struct amdgpu_mn_node *node;
 
-		if (!blockable) {
+		if (!range->blockable) {
 			amdgpu_mn_read_unlock(amn);
 			return -EAGAIN;
 		}
 
 		node = container_of(it, struct amdgpu_mn_node, it);
-		it = interval_tree_iter_next(it, start, end);
+		it = interval_tree_iter_next(it, range->start, end);
 
-		amdgpu_mn_invalidate_node(node, start, end);
+		amdgpu_mn_invalidate_node(node, range->start, end);
 	}
 
 	return 0;
@@ -294,39 +292,38 @@ static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn,
  * are restorted in amdgpu_mn_invalidate_range_end_hsa.
  */
 static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn,
-						 struct mm_struct *mm,
-						 unsigned long start,
-						 unsigned long end,
-						 bool blockable)
+			const struct mmu_notifier_range *range)
 {
 	struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn);
 	struct interval_tree_node *it;
+	unsigned long end;
 
 	/* notification is exclusive, but interval is inclusive */
-	end -= 1;
+	end = range->end - 1;
 
-	if (amdgpu_mn_read_lock(amn, blockable))
+	if (amdgpu_mn_read_lock(amn, range->blockable))
 		return -EAGAIN;
 
-	it = interval_tree_iter_first(&amn->objects, start, end);
+	it = interval_tree_iter_first(&amn->objects, range->start, end);
 	while (it) {
 		struct amdgpu_mn_node *node;
 		struct amdgpu_bo *bo;
 
-		if (!blockable) {
+		if (!range->blockable) {
 			amdgpu_mn_read_unlock(amn);
 			return -EAGAIN;
 		}
 
 		node = container_of(it, struct amdgpu_mn_node, it);
-		it = interval_tree_iter_next(it, start, end);
+		it = interval_tree_iter_next(it, range->start, end);
 
 		list_for_each_entry(bo, &node->bos, mn_list) {
 			struct kgd_mem *mem = bo->kfd_bo;
 
 			if (amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm,
-							 start, end))
-				amdgpu_amdkfd_evict_userptr(mem, mm);
+							 range->start,
+							 end))
+				amdgpu_amdkfd_evict_userptr(mem, range->mm);
 		}
 	}
 
@@ -344,9 +341,7 @@ static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn,
  * Release the lock again to allow new command submissions.
  */
 static void amdgpu_mn_invalidate_range_end(struct mmu_notifier *mn,
-					   struct mm_struct *mm,
-					   unsigned long start,
-					   unsigned long end)
+			const struct mmu_notifier_range *range)
 {
 	struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn);
 
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 2c9b284036d1..3df77020aada 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -113,27 +113,25 @@ static void del_object(struct i915_mmu_object *mo)
 }
 
 static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
-						       struct mm_struct *mm,
-						       unsigned long start,
-						       unsigned long end,
-						       bool blockable)
+			const struct mmu_notifier_range *range)
 {
 	struct i915_mmu_notifier *mn =
 		container_of(_mn, struct i915_mmu_notifier, mn);
 	struct i915_mmu_object *mo;
 	struct interval_tree_node *it;
 	LIST_HEAD(cancelled);
+	unsigned long end;
 
 	if (RB_EMPTY_ROOT(&mn->objects.rb_root))
 		return 0;
 
 	/* interval ranges are inclusive, but invalidate range is exclusive */
-	end--;
+	end = range->end - 1;
 
 	spin_lock(&mn->lock);
-	it = interval_tree_iter_first(&mn->objects, start, end);
+	it = interval_tree_iter_first(&mn->objects, range->start, end);
 	while (it) {
-		if (!blockable) {
+		if (!range->blockable) {
 			spin_unlock(&mn->lock);
 			return -EAGAIN;
 		}
@@ -151,7 +149,7 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 			queue_work(mn->wq, &mo->work);
 
 		list_add(&mo->link, &cancelled);
-		it = interval_tree_iter_next(it, start, end);
+		it = interval_tree_iter_next(it, range->start, end);
 	}
 	list_for_each_entry(mo, &cancelled, link)
 		del_object(mo);
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index f8b35df44c60..b3019505065a 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -119,40 +119,38 @@ static void radeon_mn_release(struct mmu_notifier *mn,
  * unmap them by move them into system domain again.
  */
 static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn,
-					     struct mm_struct *mm,
-					     unsigned long start,
-					     unsigned long end,
-					     bool blockable)
+				const struct mmu_notifier_range *range)
 {
 	struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn);
 	struct ttm_operation_ctx ctx = { false, false };
 	struct interval_tree_node *it;
+	unsigned long end;
 	int ret = 0;
 
 	/* notification is exclusive, but interval is inclusive */
-	end -= 1;
+	end = range->end - 1;
 
 	/* TODO we should be able to split locking for interval tree and
 	 * the tear down.
 	 */
-	if (blockable)
+	if (range->blockable)
 		mutex_lock(&rmn->lock);
 	else if (!mutex_trylock(&rmn->lock))
 		return -EAGAIN;
 
-	it = interval_tree_iter_first(&rmn->objects, start, end);
+	it = interval_tree_iter_first(&rmn->objects, range->start, end);
 	while (it) {
 		struct radeon_mn_node *node;
 		struct radeon_bo *bo;
 		long r;
 
-		if (!blockable) {
+		if (!range->blockable) {
 			ret = -EAGAIN;
 			goto out_unlock;
 		}
 
 		node = container_of(it, struct radeon_mn_node, it);
-		it = interval_tree_iter_next(it, start, end);
+		it = interval_tree_iter_next(it, range->start, end);
 
 		list_for_each_entry(bo, &node->bos, mn_list) {
 
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 676c1fd1119d..25db6ff68c70 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -146,15 +146,12 @@ static int invalidate_range_start_trampoline(struct ib_umem_odp *item,
 }
 
 static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn,
-						    struct mm_struct *mm,
-						    unsigned long start,
-						    unsigned long end,
-						    bool blockable)
+				const struct mmu_notifier_range *range)
 {
 	struct ib_ucontext_per_mm *per_mm =
 		container_of(mn, struct ib_ucontext_per_mm, mn);
 
-	if (blockable)
+	if (range->blockable)
 		down_read(&per_mm->umem_rwsem);
 	else if (!down_read_trylock(&per_mm->umem_rwsem))
 		return -EAGAIN;
@@ -169,9 +166,10 @@ static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn,
 		return 0;
 	}
 
-	return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, start, end,
+	return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start,
+					     range->end,
 					     invalidate_range_start_trampoline,
-					     blockable, NULL);
+					     range->blockable, NULL);
 }
 
 static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start,
@@ -182,9 +180,7 @@ static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start,
 }
 
 static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn,
-						  struct mm_struct *mm,
-						  unsigned long start,
-						  unsigned long end)
+				const struct mmu_notifier_range *range)
 {
 	struct ib_ucontext_per_mm *per_mm =
 		container_of(mn, struct ib_ucontext_per_mm, mn);
@@ -192,8 +188,8 @@ static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn,
 	if (unlikely(!per_mm->active))
 		return;
 
-	rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, start,
-				      end,
+	rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start,
+				      range->end,
 				      invalidate_range_end_trampoline, true, NULL);
 	up_read(&per_mm->umem_rwsem);
 }
diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
index 475b769e120c..14d2a90964c3 100644
--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
+++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
@@ -68,8 +68,7 @@ struct mmu_rb_handler {
 static unsigned long mmu_node_start(struct mmu_rb_node *);
 static unsigned long mmu_node_last(struct mmu_rb_node *);
 static int mmu_notifier_range_start(struct mmu_notifier *,
-				     struct mm_struct *,
-				     unsigned long, unsigned long, bool);
+		const struct mmu_notifier_range *);
 static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,
 					   unsigned long, unsigned long);
 static void do_remove(struct mmu_rb_handler *handler,
@@ -284,10 +283,7 @@ void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler,
 }
 
 static int mmu_notifier_range_start(struct mmu_notifier *mn,
-				     struct mm_struct *mm,
-				     unsigned long start,
-				     unsigned long end,
-				     bool blockable)
+		const struct mmu_notifier_range *range)
 {
 	struct mmu_rb_handler *handler =
 		container_of(mn, struct mmu_rb_handler, mn);
@@ -297,10 +293,11 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
 	bool added = false;
 
 	spin_lock_irqsave(&handler->lock, flags);
-	for (node = __mmu_int_rb_iter_first(root, start, end - 1);
+	for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1);
 	     node; node = ptr) {
 		/* Guard against node removal. */
-		ptr = __mmu_int_rb_iter_next(node, start, end - 1);
+		ptr = __mmu_int_rb_iter_next(node, range->start,
+					     range->end - 1);
 		trace_hfi1_mmu_mem_invalidate(node->addr, node->len);
 		if (handler->ops->invalidate(handler->ops_arg, node)) {
 			__mmu_int_rb_remove(node, root);
diff --git a/drivers/misc/mic/scif/scif_dma.c b/drivers/misc/mic/scif/scif_dma.c
index 18b8ed57c4ac..e0d97044d0e9 100644
--- a/drivers/misc/mic/scif/scif_dma.c
+++ b/drivers/misc/mic/scif/scif_dma.c
@@ -201,23 +201,18 @@ static void scif_mmu_notifier_release(struct mmu_notifier *mn,
 }
 
 static int scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
-						     struct mm_struct *mm,
-						     unsigned long start,
-						     unsigned long end,
-						     bool blockable)
+					const struct mmu_notifier_range *range)
 {
 	struct scif_mmu_notif	*mmn;
 
 	mmn = container_of(mn, struct scif_mmu_notif, ep_mmu_notifier);
-	scif_rma_destroy_tcw(mmn, start, end - start);
+	scif_rma_destroy_tcw(mmn, range->start, range->end - range->start);
 
 	return 0;
 }
 
 static void scif_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
-						   struct mm_struct *mm,
-						   unsigned long start,
-						   unsigned long end)
+			const struct mmu_notifier_range *range)
 {
 	/*
 	 * Nothing to do here, everything needed was done in
diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
index 03b49d52092e..ca2032afe035 100644
--- a/drivers/misc/sgi-gru/grutlbpurge.c
+++ b/drivers/misc/sgi-gru/grutlbpurge.c
@@ -220,9 +220,7 @@ void gru_flush_all_tlb(struct gru_state *gru)
  * MMUOPS notifier callout functions
  */
 static int gru_invalidate_range_start(struct mmu_notifier *mn,
-				       struct mm_struct *mm,
-				       unsigned long start, unsigned long end,
-				       bool blockable)
+			const struct mmu_notifier_range *range)
 {
 	struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
 						 ms_notifier);
@@ -230,15 +228,14 @@ static int gru_invalidate_range_start(struct mmu_notifier *mn,
 	STAT(mmu_invalidate_range);
 	atomic_inc(&gms->ms_range_active);
 	gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx, act %d\n", gms,
-		start, end, atomic_read(&gms->ms_range_active));
-	gru_flush_tlb_range(gms, start, end - start);
+		range->start, range->end, atomic_read(&gms->ms_range_active));
+	gru_flush_tlb_range(gms, range->start, range->end - range->start);
 
 	return 0;
 }
 
 static void gru_invalidate_range_end(struct mmu_notifier *mn,
-				     struct mm_struct *mm, unsigned long start,
-				     unsigned long end)
+			const struct mmu_notifier_range *range)
 {
 	struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
 						 ms_notifier);
@@ -247,7 +244,8 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
 	(void)atomic_dec_and_test(&gms->ms_range_active);
 
 	wake_up_all(&gms->ms_wait_queue);
-	gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx\n", gms, start, end);
+	gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx\n",
+		gms, range->start, range->end);
 }
 
 static void gru_release(struct mmu_notifier *mn, struct mm_struct *mm)
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index b0b02a501167..5efc5eee9544 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -520,26 +520,26 @@ static int unmap_if_in_range(struct gntdev_grant_map *map,
 }
 
 static int mn_invl_range_start(struct mmu_notifier *mn,
-				struct mm_struct *mm,
-				unsigned long start, unsigned long end,
-				bool blockable)
+			       const struct mmu_notifier_range *range)
 {
 	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
 	struct gntdev_grant_map *map;
 	int ret = 0;
 
-	if (blockable)
+	if (range->blockable)
 		mutex_lock(&priv->lock);
 	else if (!mutex_trylock(&priv->lock))
 		return -EAGAIN;
 
 	list_for_each_entry(map, &priv->maps, next) {
-		ret = unmap_if_in_range(map, start, end, blockable);
+		ret = unmap_if_in_range(map, range->start, range->end,
+					range->blockable);
 		if (ret)
 			goto out_unlock;
 	}
 	list_for_each_entry(map, &priv->freeable_maps, next) {
-		ret = unmap_if_in_range(map, start, end, blockable);
+		ret = unmap_if_in_range(map, range->start, range->end,
+					range->blockable);
 		if (ret)
 			goto out_unlock;
 	}
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 9893a6432adf..368f0c1a049d 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -25,6 +25,13 @@ struct mmu_notifier_mm {
 	spinlock_t lock;
 };
 
+struct mmu_notifier_range {
+	struct mm_struct *mm;
+	unsigned long start;
+	unsigned long end;
+	bool blockable;
+};
+
 struct mmu_notifier_ops {
 	/*
 	 * Called either by mmu_notifier_unregister or when the mm is
@@ -146,12 +153,9 @@ struct mmu_notifier_ops {
 	 *
 	 */
 	int (*invalidate_range_start)(struct mmu_notifier *mn,
-				       struct mm_struct *mm,
-				       unsigned long start, unsigned long end,
-				       bool blockable);
+				      const struct mmu_notifier_range *range);
 	void (*invalidate_range_end)(struct mmu_notifier *mn,
-				     struct mm_struct *mm,
-				     unsigned long start, unsigned long end);
+				     const struct mmu_notifier_range *range);
 
 	/*
 	 * invalidate_range() is either called between
diff --git a/mm/hmm.c b/mm/hmm.c
index 90c34f3d1243..1965f2caf5eb 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -189,35 +189,30 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 }
 
 static int hmm_invalidate_range_start(struct mmu_notifier *mn,
-				      struct mm_struct *mm,
-				      unsigned long start,
-				      unsigned long end,
-				      bool blockable)
+			const struct mmu_notifier_range *range)
 {
 	struct hmm_update update;
-	struct hmm *hmm = mm->hmm;
+	struct hmm *hmm = range->mm->hmm;
 
 	VM_BUG_ON(!hmm);
 
-	update.start = start;
-	update.end = end;
+	update.start = range->start;
+	update.end = range->end;
 	update.event = HMM_UPDATE_INVALIDATE;
-	update.blockable = blockable;
+	update.blockable = range->blockable;
 	return hmm_invalidate_range(hmm, true, &update);
 }
 
 static void hmm_invalidate_range_end(struct mmu_notifier *mn,
-				     struct mm_struct *mm,
-				     unsigned long start,
-				     unsigned long end)
+			const struct mmu_notifier_range *range)
 {
 	struct hmm_update update;
-	struct hmm *hmm = mm->hmm;
+	struct hmm *hmm = range->mm->hmm;
 
 	VM_BUG_ON(!hmm);
 
-	update.start = start;
-	update.end = end;
+	update.start = range->start;
+	update.end = range->end;
 	update.event = HMM_UPDATE_INVALIDATE;
 	update.blockable = true;
 	hmm_invalidate_range(hmm, false, &update);
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 5119ff846769..5f6665ae3ee2 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -178,14 +178,20 @@ int __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
 				  unsigned long start, unsigned long end,
 				  bool blockable)
 {
+	struct mmu_notifier_range _range, *range = &_range;
 	struct mmu_notifier *mn;
 	int ret = 0;
 	int id;
 
+	range->blockable = blockable;
+	range->start = start;
+	range->end = end;
+	range->mm = mm;
+
 	id = srcu_read_lock(&srcu);
 	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
 		if (mn->ops->invalidate_range_start) {
-			int _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable);
+			int _ret = mn->ops->invalidate_range_start(mn, range);
 			if (_ret) {
 				pr_info("%pS callback failed with %d in %sblockable context.\n",
 						mn->ops->invalidate_range_start, _ret,
@@ -205,9 +211,20 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
 					 unsigned long end,
 					 bool only_end)
 {
+	struct mmu_notifier_range _range, *range = &_range;
 	struct mmu_notifier *mn;
 	int id;
 
+	/*
+	 * The end callback will never be called if the start callback refused
+	 * to go through because blockable was false, so here we assume that
+	 * we can block.
+	 */
+	range->blockable = true;
+	range->start = start;
+	range->end = end;
+	range->mm = mm;
+
 	id = srcu_read_lock(&srcu);
 	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
 		/*
@@ -226,7 +243,7 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
 		if (!only_end && mn->ops->invalidate_range)
 			mn->ops->invalidate_range(mn, mm, start, end);
 		if (mn->ops->invalidate_range_end)
-			mn->ops->invalidate_range_end(mn, mm, start, end);
+			mn->ops->invalidate_range_end(mn, range);
 	}
 	srcu_read_unlock(&srcu, id);
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2679e476b6c3..f829f63f2b16 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -360,10 +360,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 }
 
 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
-						    struct mm_struct *mm,
-						    unsigned long start,
-						    unsigned long end,
-						    bool blockable)
+					const struct mmu_notifier_range *range)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
 	int need_tlb_flush = 0, idx;
@@ -377,7 +374,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	 * count is also read inside the mmu_lock critical section.
 	 */
 	kvm->mmu_notifier_count++;
-	need_tlb_flush = kvm_unmap_hva_range(kvm, start, end);
+	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end);
 	need_tlb_flush |= kvm->tlbs_dirty;
 	/* we've to flush the tlb before the pages can be freed */
 	if (need_tlb_flush)
@@ -385,7 +382,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 
 	spin_unlock(&kvm->mmu_lock);
 
-	ret = kvm_arch_mmu_notifier_invalidate_range(kvm, start, end, blockable);
+	ret = kvm_arch_mmu_notifier_invalidate_range(kvm, range->start,
+					range->end, range->blockable);
 
 	srcu_read_unlock(&kvm->srcu, idx);
 
@@ -393,9 +391,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 }
 
 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
-						  struct mm_struct *mm,
-						  unsigned long start,
-						  unsigned long end)
+					const struct mmu_notifier_range *range)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
 
-- 
2.17.2



* [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-03 20:18 [PATCH 0/3] mmu notifier contextual informations jglisse
  2018-12-03 20:18 ` [PATCH 1/3] mm/mmu_notifier: use structure for invalidate_range_start/end callback jglisse
@ 2018-12-03 20:18 ` jglisse
  2018-12-04  0:09   ` Jerome Glisse
                     ` (3 more replies)
  2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
  2018-12-04  7:35 ` [PATCH 0/3] mmu notifier contextual informations Koenig, Christian
  3 siblings, 4 replies; 17+ messages in thread
From: jglisse @ 2018-12-03 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, linux-kernel, Jérôme Glisse,
	Matthew Wilcox, Ross Zwisler, Jan Kara, Dan Williams,
	Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

From: Jérôme Glisse <jglisse@redhat.com>

To avoid having to change many call sites every time we want to add a
parameter, use a structure to group all parameters of the mmu_notifier
invalidate_range_start/end calls. No functional changes with this
patch.
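
The conversion at each call site is mechanical: build the range on the
stack and pass it down. Roughly (simplified from the hunks below):

	struct mmu_notifier_range range;

	range.start = start;
	range.end = end;
	range.mm = mm;

	mmu_notifier_invalidate_range_start(&range);
	/* ... modify CPU page tables for [start, end) ... */
	mmu_notifier_invalidate_range_end(&range);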

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: kvm@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linux-rdma@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
---
 fs/dax.c                     |  10 +++-
 fs/proc/task_mmu.c           |   9 ++-
 include/linux/mm.h           |   4 +-
 include/linux/mmu_notifier.h |  59 +++++++++----------
 kernel/events/uprobes.c      |  12 ++--
 mm/huge_memory.c             |  54 ++++++++++--------
 mm/hugetlb.c                 |  59 ++++++++++---------
 mm/khugepaged.c              |  12 ++--
 mm/ksm.c                     |  24 ++++----
 mm/madvise.c                 |  21 +++----
 mm/memory.c                  | 107 ++++++++++++++++++++---------------
 mm/migrate.c                 |  28 ++++-----
 mm/mmu_notifier.c            |  35 +++---------
 mm/mprotect.c                |  16 ++++--
 mm/mremap.c                  |  13 +++--
 mm/oom_kill.c                |  19 ++++---
 mm/rmap.c                    |  32 +++++++----
 17 files changed, 276 insertions(+), 238 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 9bcce89ea18e..e22508ee19ec 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -758,7 +758,10 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 
 	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		unsigned long address, start, end;
+		struct mmu_notifier_range range;
+		unsigned long address;
+
+		range.mm = vma->vm_mm;
 
 		cond_resched();
 
@@ -772,7 +775,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 		 * call mmu_notifier_invalidate_range_start() on our behalf
 		 * before taking any lock.
 		 */
-		if (follow_pte_pmd(vma->vm_mm, address, &start, &end, &ptep, &pmdp, &ptl))
+		if (follow_pte_pmd(vma->vm_mm, address, &range,
+				   &ptep, &pmdp, &ptl))
 			continue;
 
 		/*
@@ -814,7 +818,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			pte_unmap_unlock(ptep, ptl);
 		}
 
-		mmu_notifier_invalidate_range_end(vma->vm_mm, start, end);
+		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 47c3764c469b..53d625925669 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1096,6 +1096,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 		return -ESRCH;
 	mm = get_task_mm(task);
 	if (mm) {
+		struct mmu_notifier_range range;
 		struct clear_refs_private cp = {
 			.type = type,
 		};
@@ -1139,11 +1140,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 				downgrade_write(&mm->mmap_sem);
 				break;
 			}
-			mmu_notifier_invalidate_range_start(mm, 0, -1);
+
+			range.start = 0;
+			range.end = -1UL;
+			range.mm = mm;
+			mmu_notifier_invalidate_range_start(&range);
 		}
 		walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
 		if (type == CLEAR_REFS_SOFT_DIRTY)
-			mmu_notifier_invalidate_range_end(mm, 0, -1);
+			mmu_notifier_invalidate_range_end(&range);
 		tlb_finish_mmu(&tlb, 0, -1);
 		up_read(&mm->mmap_sem);
 out_mm:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5411de93a363..e7b6f2b30713 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1397,6 +1397,8 @@ struct mm_walk {
 	void *private;
 };
 
+struct mmu_notifier_range;
+
 int walk_page_range(unsigned long addr, unsigned long end,
 		struct mm_walk *walk);
 int walk_page_vma(struct vm_area_struct *vma, struct mm_walk *walk);
@@ -1405,7 +1407,7 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
 			struct vm_area_struct *vma);
 int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
-			     unsigned long *start, unsigned long *end,
+		 	     struct mmu_notifier_range *range,
 			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 368f0c1a049d..cbeece8e47d4 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -220,11 +220,8 @@ extern int __mmu_notifier_test_young(struct mm_struct *mm,
 				     unsigned long address);
 extern void __mmu_notifier_change_pte(struct mm_struct *mm,
 				      unsigned long address, pte_t pte);
-extern int __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
-				  unsigned long start, unsigned long end,
-				  bool blockable);
-extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
-				  unsigned long start, unsigned long end,
+extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *);
+extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r,
 				  bool only_end);
 extern void __mmu_notifier_invalidate_range(struct mm_struct *mm,
 				  unsigned long start, unsigned long end);
@@ -268,33 +265,37 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
 		__mmu_notifier_change_pte(mm, address, pte);
 }
 
-static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline void
+mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
-	if (mm_has_notifiers(mm))
-		__mmu_notifier_invalidate_range_start(mm, start, end, true);
+	if (mm_has_notifiers(range->mm)) {
+		range->blockable = true;
+		__mmu_notifier_invalidate_range_start(range);
+	}
 }
 
-static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline int
+mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range)
 {
-	if (mm_has_notifiers(mm))
-		return __mmu_notifier_invalidate_range_start(mm, start, end, false);
+	if (mm_has_notifiers(range->mm)) {
+		range->blockable = false;
+		return __mmu_notifier_invalidate_range_start(range);
+	}
 	return 0;
 }
 
-static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline void
+mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
 {
-	if (mm_has_notifiers(mm))
-		__mmu_notifier_invalidate_range_end(mm, start, end, false);
+	if (mm_has_notifiers(range->mm))
+		__mmu_notifier_invalidate_range_end(range, false);
 }
 
-static inline void mmu_notifier_invalidate_range_only_end(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline void
+mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range)
 {
-	if (mm_has_notifiers(mm))
-		__mmu_notifier_invalidate_range_end(mm, start, end, true);
+	if (mm_has_notifiers(range->mm))
+		__mmu_notifier_invalidate_range_end(range, true);
 }
 
 static inline void mmu_notifier_invalidate_range(struct mm_struct *mm,
@@ -455,24 +456,24 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
 {
 }
 
-static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline void
+mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
 }
 
-static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline int
+mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range)
 {
 	return 0;
 }
 
-static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline
+void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
 {
 }
 
-static inline void mmu_notifier_invalidate_range_only_end(struct mm_struct *mm,
-				  unsigned long start, unsigned long end)
+static inline void
+mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range)
 {
 }
 
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 322e97bbb437..aa7996ca361e 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -171,11 +171,13 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		.address = addr,
 	};
 	int err;
-	/* For mmu_notifiers */
-	const unsigned long mmun_start = addr;
-	const unsigned long mmun_end   = addr + PAGE_SIZE;
+	struct mmu_notifier_range range;
 	struct mem_cgroup *memcg;
 
+	range.start = addr;
+	range.end = addr + PAGE_SIZE;
+	range.mm = mm;
+
 	VM_BUG_ON_PAGE(PageTransHuge(old_page), old_page);
 
 	err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, &memcg,
@@ -186,7 +188,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	/* For try_to_free_swap() and munlock_vma_page() below */
 	lock_page(old_page);
 
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_start(&range);
 	err = -EAGAIN;
 	if (!page_vma_mapped_walk(&pvmw)) {
 		mem_cgroup_cancel_charge(new_page, memcg, false);
@@ -220,7 +222,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 
 	err = 0;
  unlock:
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 	unlock_page(old_page);
 	return err;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 622cced74fd9..1a7a059dbf7d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1144,8 +1144,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 	int i;
 	vm_fault_t ret = 0;
 	struct page **pages;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 
 	pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *),
 			      GFP_KERNEL);
@@ -1183,9 +1182,10 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		cond_resched();
 	}
 
-	mmun_start = haddr;
-	mmun_end   = haddr + HPAGE_PMD_SIZE;
-	mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);
+	range.start = haddr;
+	range.end = range.start + HPAGE_PMD_SIZE;
+	range.mm = vma->vm_mm;
+	mmu_notifier_invalidate_range_start(&range);
 
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
@@ -1230,8 +1230,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 	 * No need to double call mmu_notifier->invalidate_range() callback as
 	 * the above pmdp_huge_clear_flush_notify() did already call it.
 	 */
-	mmu_notifier_invalidate_range_only_end(vma->vm_mm, mmun_start,
-						mmun_end);
+	mmu_notifier_invalidate_range_only_end(&range);
 
 	ret |= VM_FAULT_WRITE;
 	put_page(page);
@@ -1241,7 +1240,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 
 out_free_pages:
 	spin_unlock(vmf->ptl);
-	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 	for (i = 0; i < HPAGE_PMD_NR; i++) {
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
@@ -1258,8 +1257,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 	struct page *page = NULL, *new_page;
 	struct mem_cgroup *memcg;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 	gfp_t huge_gfp;			/* for allocation and charge */
 	vm_fault_t ret = 0;
 
@@ -1349,9 +1347,10 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 				    vma, HPAGE_PMD_NR);
 	__SetPageUptodate(new_page);
 
-	mmun_start = haddr;
-	mmun_end   = haddr + HPAGE_PMD_SIZE;
-	mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);
+	range.start = haddr;
+	range.end = range.start + HPAGE_PMD_SIZE;
+	range.mm = vma->vm_mm;
+	mmu_notifier_invalidate_range_start(&range);
 
 	spin_lock(vmf->ptl);
 	if (page)
@@ -1386,8 +1385,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 	 * No need to double call mmu_notifier->invalidate_range() callback as
 	 * the above pmdp_huge_clear_flush_notify() did already call it.
 	 */
-	mmu_notifier_invalidate_range_only_end(vma->vm_mm, mmun_start,
-					       mmun_end);
+	mmu_notifier_invalidate_range_only_end(&range);
 out:
 	return ret;
 out_unlock:
@@ -2029,13 +2027,17 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 {
 	spinlock_t *ptl;
 	struct mm_struct *mm = vma->vm_mm;
-	unsigned long haddr = address & HPAGE_PUD_MASK;
+	struct mmu_notifier_range range;
 
-	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PUD_SIZE);
+	range.start = address & HPAGE_PUD_MASK;
+	range.end = range.start + HPAGE_PUD_SIZE;
+	range.mm = mm;
+
+	mmu_notifier_invalidate_range_start(&range);
 	ptl = pud_lock(mm, pud);
 	if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud)))
 		goto out;
-	__split_huge_pud_locked(vma, pud, haddr);
+	__split_huge_pud_locked(vma, pud, range.start);
 
 out:
 	spin_unlock(ptl);
@@ -2043,8 +2045,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 	 * No need to double call mmu_notifier->invalidate_range() callback as
 	 * the above pudp_huge_clear_flush_notify() did already call it.
 	 */
-	mmu_notifier_invalidate_range_only_end(mm, haddr, haddr +
-					       HPAGE_PUD_SIZE);
+	mmu_notifier_invalidate_range_only_end(&range);
 }
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
@@ -2245,9 +2246,13 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 {
 	spinlock_t *ptl;
 	struct mm_struct *mm = vma->vm_mm;
-	unsigned long haddr = address & HPAGE_PMD_MASK;
+	struct mmu_notifier_range range;
+
+	range.start = address & HPAGE_PMD_MASK;
+	range.end = range.start + HPAGE_PMD_SIZE;
+	range.mm = mm;
 
-	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(mm, pmd);
 
 	/*
@@ -2264,7 +2269,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			clear_page_mlock(page);
 	} else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
 		goto out;
-	__split_huge_pmd_locked(vma, pmd, haddr, freeze);
+	__split_huge_pmd_locked(vma, pmd, range.start, freeze);
 out:
 	spin_unlock(ptl);
 	/*
@@ -2280,8 +2285,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	 *     any further changes to individual pte will notify. So no need
 	 *     to call mmu_notifier->invalidate_range()
 	 */
-	mmu_notifier_invalidate_range_only_end(mm, haddr, haddr +
-					       HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_only_end(&range);
 }
 
 void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 705a3e9cc910..4bfbdab44d51 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3239,16 +3239,17 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	int cow;
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 	int ret = 0;
 
 	cow = (vma->vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
 
-	mmun_start = vma->vm_start;
-	mmun_end = vma->vm_end;
+	range.start = vma->vm_start;
+	range.end = vma->vm_end;
+	range.mm = src;
+
 	if (cow)
-		mmu_notifier_invalidate_range_start(src, mmun_start, mmun_end);
+		mmu_notifier_invalidate_range_start(&range);
 
 	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
 		spinlock_t *src_ptl, *dst_ptl;
@@ -3324,7 +3325,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	}
 
 	if (cow)
-		mmu_notifier_invalidate_range_end(src, mmun_start, mmun_end);
+		mmu_notifier_invalidate_range_end(&range);
 
 	return ret;
 }
@@ -3341,8 +3342,11 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	struct page *page;
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
-	unsigned long mmun_start = start;	/* For mmu_notifiers */
-	unsigned long mmun_end   = end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
+
+	range.start = start;
+	range.end = end;
+	range.mm = mm;
 
 	WARN_ON(!is_vm_hugetlb_page(vma));
 	BUG_ON(start & ~huge_page_mask(h));
@@ -3358,8 +3362,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	/*
 	 * If sharing possible, alert mmu notifiers of worst case.
 	 */
-	adjust_range_if_pmd_sharing_possible(vma, &mmun_start, &mmun_end);
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+	mmu_notifier_invalidate_range_start(&range);
 	address = start;
 	for (; address < end; address += sz) {
 		ptep = huge_pte_offset(mm, address, sz);
@@ -3427,7 +3431,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (ref_page)
 			break;
 	}
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 	tlb_end_vma(tlb, vma);
 }
 
@@ -3545,9 +3549,8 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct page *old_page, *new_page;
 	int outside_reserve = 0;
 	vm_fault_t ret = 0;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
 	unsigned long haddr = address & huge_page_mask(h);
+	struct mmu_notifier_range range;
 
 	pte = huge_ptep_get(ptep);
 	old_page = pte_page(pte);
@@ -3626,9 +3629,10 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	__SetPageUptodate(new_page);
 	set_page_huge_active(new_page);
 
-	mmun_start = haddr;
-	mmun_end = mmun_start + huge_page_size(h);
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	range.start = haddr;
+	range.end = range.start + huge_page_size(h);
+	range.mm = mm;
+	mmu_notifier_invalidate_range_start(&range);
 
 	/*
 	 * Retake the page table lock to check for racing updates
@@ -3641,7 +3645,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 
 		/* Break COW */
 		huge_ptep_clear_flush(vma, haddr, ptep);
-		mmu_notifier_invalidate_range(mm, mmun_start, mmun_end);
+		mmu_notifier_invalidate_range(mm, range.start, range.end);
 		set_huge_pte_at(mm, haddr, ptep,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
@@ -3650,7 +3654,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 		new_page = old_page;
 	}
 	spin_unlock(ptl);
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 out_release_all:
 	restore_reserve_on_error(h, vma, haddr, new_page);
 	put_page(new_page);
@@ -4339,21 +4343,24 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	pte_t pte;
 	struct hstate *h = hstate_vma(vma);
 	unsigned long pages = 0;
-	unsigned long f_start = start;
-	unsigned long f_end = end;
 	bool shared_pmd = false;
+	struct mmu_notifier_range range;
+
+	range.start = start;
+	range.end = end;
+	range.mm = mm;
 
 	/*
 	 * In the case of shared PMDs, the area to flush could be beyond
-	 * start/end.  Set f_start/f_end to cover the maximum possible
+	 * start/end.  Set range.start/range.end to cover the maximum possible
 	 * range if PMD sharing is possible.
 	 */
-	adjust_range_if_pmd_sharing_possible(vma, &f_start, &f_end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
 
 	BUG_ON(address >= end);
-	flush_cache_range(vma, f_start, f_end);
+	flush_cache_range(vma, range.start, range.end);
 
-	mmu_notifier_invalidate_range_start(mm, f_start, f_end);
+	mmu_notifier_invalidate_range_start(&range);
 	i_mmap_lock_write(vma->vm_file->f_mapping);
 	for (; address < end; address += huge_page_size(h)) {
 		spinlock_t *ptl;
@@ -4404,7 +4411,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	 * did unshare a page of pmds, flush the range corresponding to the pud.
 	 */
 	if (shared_pmd)
-		flush_hugetlb_tlb_range(vma, f_start, f_end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
 		flush_hugetlb_tlb_range(vma, start, end);
 	/*
@@ -4414,7 +4421,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	 * See Documentation/vm/mmu_notifier.rst
 	 */
 	i_mmap_unlock_write(vma->vm_file->f_mapping);
-	mmu_notifier_invalidate_range_end(mm, f_start, f_end);
+	mmu_notifier_invalidate_range_end(&range);
 
 	return pages << h->order;
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8e2ff195ecb3..e9fe0c9a9f56 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -944,8 +944,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	int isolated = 0, result = 0;
 	struct mem_cgroup *memcg;
 	struct vm_area_struct *vma;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 	gfp_t gfp;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
@@ -1017,9 +1016,10 @@ static void collapse_huge_page(struct mm_struct *mm,
 	pte = pte_offset_map(pmd, address);
 	pte_ptl = pte_lockptr(mm, pmd);
 
-	mmun_start = address;
-	mmun_end   = address + HPAGE_PMD_SIZE;
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	range.start = address;
+	range.end = range.start + HPAGE_PMD_SIZE;
+	range.mm = mm;
+	mmu_notifier_invalidate_range_start(&range);
 	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
 	/*
 	 * After this gup_fast can't run anymore. This also removes
@@ -1029,7 +1029,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 */
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 
 	spin_lock(pte_ptl);
 	isolated = __collapse_huge_page_isolate(vma, address, pte);
diff --git a/mm/ksm.c b/mm/ksm.c
index 5b0894b45ee5..262694d0cd4c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1042,8 +1042,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	};
 	int swapped;
 	int err = -EFAULT;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 
 	pvmw.address = page_address_in_vma(page, vma);
 	if (pvmw.address == -EFAULT)
@@ -1051,9 +1050,10 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 
 	BUG_ON(PageTransCompound(page));
 
-	mmun_start = pvmw.address;
-	mmun_end   = pvmw.address + PAGE_SIZE;
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	range.start = pvmw.address;
+	range.end = range.start + PAGE_SIZE;
+	range.mm = mm;
+	mmu_notifier_invalidate_range_start(&range);
 
 	if (!page_vma_mapped_walk(&pvmw))
 		goto out_mn;
@@ -1105,7 +1105,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 out_unlock:
 	page_vma_mapped_walk_done(&pvmw);
 out_mn:
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 out:
 	return err;
 }
@@ -1129,8 +1129,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	spinlock_t *ptl;
 	unsigned long addr;
 	int err = -EFAULT;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 
 	addr = page_address_in_vma(page, vma);
 	if (addr == -EFAULT)
@@ -1140,9 +1139,10 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	if (!pmd)
 		goto out;
 
-	mmun_start = addr;
-	mmun_end   = addr + PAGE_SIZE;
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	range.start = addr;
+	range.end = addr + PAGE_SIZE;
+	range.mm = mm;
+	mmu_notifier_invalidate_range_start(&range);
 
 	ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	if (!pte_same(*ptep, orig_pte)) {
@@ -1188,7 +1188,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	pte_unmap_unlock(ptep, ptl);
 	err = 0;
 out_mn:
-	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 out:
 	return err;
 }
diff --git a/mm/madvise.c b/mm/madvise.c
index 6cb1ca93e290..f20dd80ca21b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -458,29 +458,30 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
 static int madvise_free_single_vma(struct vm_area_struct *vma,
 			unsigned long start_addr, unsigned long end_addr)
 {
-	unsigned long start, end;
 	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
 
 	/* MADV_FREE works for only anon vma at the moment */
 	if (!vma_is_anonymous(vma))
 		return -EINVAL;
 
-	start = max(vma->vm_start, start_addr);
-	if (start >= vma->vm_end)
+	range.start = max(vma->vm_start, start_addr);
+	if (range.start >= vma->vm_end)
 		return -EINVAL;
-	end = min(vma->vm_end, end_addr);
-	if (end <= vma->vm_start)
+	range.end = min(vma->vm_end, end_addr);
+	if (range.end <= vma->vm_start)
 		return -EINVAL;
+	range.mm = mm;
 
 	lru_add_drain();
-	tlb_gather_mmu(&tlb, mm, start, end);
+	tlb_gather_mmu(&tlb, mm, range.start, range.end);
 	update_hiwater_rss(mm);
 
-	mmu_notifier_invalidate_range_start(mm, start, end);
-	madvise_free_page_range(&tlb, vma, start, end);
-	mmu_notifier_invalidate_range_end(mm, start, end);
-	tlb_finish_mmu(&tlb, start, end);
+	mmu_notifier_invalidate_range_start(&range);
+	madvise_free_page_range(&tlb, vma, range.start, range.end);
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb, range.start, range.end);
 
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index 4ad2d293ddc2..36e0b83949fc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -973,8 +973,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	unsigned long next;
 	unsigned long addr = vma->vm_start;
 	unsigned long end = vma->vm_end;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
+	struct mmu_notifier_range range;
 	bool is_cow;
 	int ret;
 
@@ -1008,11 +1007,12 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * is_cow_mapping() returns true.
 	 */
 	is_cow = is_cow_mapping(vma->vm_flags);
-	mmun_start = addr;
-	mmun_end   = end;
+	range.start = addr;
+	range.end = end;
+	range.mm = src_mm;
+
 	if (is_cow)
-		mmu_notifier_invalidate_range_start(src_mm, mmun_start,
-						    mmun_end);
+		mmu_notifier_invalidate_range_start(&range);
 
 	ret = 0;
 	dst_pgd = pgd_offset(dst_mm, addr);
@@ -1029,7 +1029,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	} while (dst_pgd++, src_pgd++, addr = next, addr != end);
 
 	if (is_cow)
-		mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end);
+		mmu_notifier_invalidate_range_end(&range);
 	return ret;
 }
 
@@ -1332,12 +1332,16 @@ void unmap_vmas(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long start_addr,
 		unsigned long end_addr)
 {
-	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_notifier_range range;
 
-	mmu_notifier_invalidate_range_start(mm, start_addr, end_addr);
+	range.start = start_addr;
+	range.end = end_addr;
+	range.mm = vma->vm_mm;
+
+	mmu_notifier_invalidate_range_start(&range);
 	for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
 		unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
-	mmu_notifier_invalidate_range_end(mm, start_addr, end_addr);
+	mmu_notifier_invalidate_range_end(&range);
 }
 
 /**
@@ -1351,18 +1355,21 @@ void unmap_vmas(struct mmu_gather *tlb,
 void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 		unsigned long size)
 {
-	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
-	unsigned long end = start + size;
+
+	range.start = start;
+	range.end = range.start + size;
+	range.mm = vma->vm_mm;
 
 	lru_add_drain();
-	tlb_gather_mmu(&tlb, mm, start, end);
-	update_hiwater_rss(mm);
-	mmu_notifier_invalidate_range_start(mm, start, end);
-	for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
-		unmap_single_vma(&tlb, vma, start, end, NULL);
-	mmu_notifier_invalidate_range_end(mm, start, end);
-	tlb_finish_mmu(&tlb, start, end);
+	tlb_gather_mmu(&tlb, range.mm, start, range.end);
+	update_hiwater_rss(range.mm);
+	mmu_notifier_invalidate_range_start(&range);
+	for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next)
+		unmap_single_vma(&tlb, vma, start, range.end, NULL);
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb, start, range.end);
 }
 
 /**
@@ -1377,17 +1384,20 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 static void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size, struct zap_details *details)
 {
-	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
-	unsigned long end = address + size;
+
+	range.start = address;
+	range.end = range.start + size;
+	range.mm = vma->vm_mm;
 
 	lru_add_drain();
-	tlb_gather_mmu(&tlb, mm, address, end);
-	update_hiwater_rss(mm);
-	mmu_notifier_invalidate_range_start(mm, address, end);
-	unmap_single_vma(&tlb, vma, address, end, details);
-	mmu_notifier_invalidate_range_end(mm, address, end);
-	tlb_finish_mmu(&tlb, address, end);
+	tlb_gather_mmu(&tlb, range.mm, address, range.end);
+	update_hiwater_rss(range.mm);
+	mmu_notifier_invalidate_range_start(&range);
+	unmap_single_vma(&tlb, vma, address, range.end, details);
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb, address, range.end);
 }
 
 /**
@@ -2247,9 +2257,12 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	struct page *new_page = NULL;
 	pte_t entry;
 	int page_copied = 0;
-	const unsigned long mmun_start = vmf->address & PAGE_MASK;
-	const unsigned long mmun_end = mmun_start + PAGE_SIZE;
 	struct mem_cgroup *memcg;
+	struct mmu_notifier_range range;
+
+	range.start = vmf->address & PAGE_MASK;
+	range.end = range.start + PAGE_SIZE;
+	range.mm = mm;
 
 	if (unlikely(anon_vma_prepare(vma)))
 		goto oom;
@@ -2272,7 +2285,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 
 	__SetPageUptodate(new_page);
 
-	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_start(&range);
 
 	/*
 	 * Re-check the pte - we dropped the lock
@@ -2349,7 +2362,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	 * No need to double call mmu_notifier->invalidate_range() callback as
 	 * the above ptep_clear_flush_notify() did already call it.
 	 */
-	mmu_notifier_invalidate_range_only_end(mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_only_end(&range);
 	if (old_page) {
 		/*
 		 * Don't let another task, with possibly unlocked vma,
@@ -4030,7 +4043,7 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 #endif /* __PAGETABLE_PMD_FOLDED */
 
 static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
-			    unsigned long *start, unsigned long *end,
+			    struct mmu_notifier_range *range,
 			    pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
@@ -4058,10 +4071,10 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;
 
-		if (start && end) {
-			*start = address & PMD_MASK;
-			*end = *start + PMD_SIZE;
-			mmu_notifier_invalidate_range_start(mm, *start, *end);
+		if (range) {
+			range->start = address & PMD_MASK;
+			range->end = range->start + PMD_SIZE;
+			mmu_notifier_invalidate_range_start(range);
 		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
@@ -4069,17 +4082,17 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (start && end)
-			mmu_notifier_invalidate_range_end(mm, *start, *end);
+		if (range)
+			mmu_notifier_invalidate_range_end(range);
 	}
 
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (start && end) {
-		*start = address & PAGE_MASK;
-		*end = *start + PAGE_SIZE;
-		mmu_notifier_invalidate_range_start(mm, *start, *end);
+	if (range) {
+		range->start = address & PAGE_MASK;
+		range->end = range->start + PAGE_SIZE;
+		mmu_notifier_invalidate_range_start(range);
 	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
@@ -4088,8 +4101,8 @@ static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (start && end)
-		mmu_notifier_invalidate_range_end(mm, *start, *end);
+	if (range)
+		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -4101,20 +4114,20 @@ static inline int follow_pte(struct mm_struct *mm, unsigned long address,
 
 	/* (void) is needed to make gcc happy */
 	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte_pmd(mm, address, NULL, NULL,
+			   !(res = __follow_pte_pmd(mm, address, NULL,
 						    ptepp, NULL, ptlp)));
 	return res;
 }
 
 int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
-			     unsigned long *start, unsigned long *end,
+		 	     struct mmu_notifier_range *range,
 			     pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	int res;
 
 	/* (void) is needed to make gcc happy */
 	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte_pmd(mm, address, start, end,
+			   !(res = __follow_pte_pmd(mm, address, range,
 						    ptepp, pmdpp, ptlp)));
 	return res;
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index f7e4bfdc13b7..4896dd9d8b28 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2303,8 +2303,13 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
  */
 static void migrate_vma_collect(struct migrate_vma *migrate)
 {
+	struct mmu_notifier_range range;
 	struct mm_walk mm_walk;
 
+	range.start = migrate->start;
+	range.end = migrate->end;
+	range.mm = mm_walk.mm;
+
 	mm_walk.pmd_entry = migrate_vma_collect_pmd;
 	mm_walk.pte_entry = NULL;
 	mm_walk.pte_hole = migrate_vma_collect_hole;
@@ -2314,13 +2319,9 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
 	mm_walk.mm = migrate->vma->vm_mm;
 	mm_walk.private = migrate;
 
-	mmu_notifier_invalidate_range_start(mm_walk.mm,
-					    migrate->start,
-					    migrate->end);
+	mmu_notifier_invalidate_range_start(&range);
 	walk_page_range(migrate->start, migrate->end, &mm_walk);
-	mmu_notifier_invalidate_range_end(mm_walk.mm,
-					  migrate->start,
-					  migrate->end);
+	mmu_notifier_invalidate_range_end(&range);
 
 	migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT);
 }
@@ -2703,7 +2704,8 @@ static void migrate_vma_pages(struct migrate_vma *migrate)
 	const unsigned long start = migrate->start;
 	struct vm_area_struct *vma = migrate->vma;
 	struct mm_struct *mm = vma->vm_mm;
-	unsigned long addr, i, mmu_start;
+	struct mmu_notifier_range range;
+	unsigned long addr, i;
 	bool notified = false;
 
 	for (i = 0, addr = start; i < npages; addr += PAGE_SIZE, i++) {
@@ -2722,11 +2724,12 @@ static void migrate_vma_pages(struct migrate_vma *migrate)
 				continue;
 			}
 			if (!notified) {
-				mmu_start = addr;
 				notified = true;
-				mmu_notifier_invalidate_range_start(mm,
-								mmu_start,
-								migrate->end);
+
+				range.start = addr;
+				range.end = migrate->end;
+				range.mm = mm;
+				mmu_notifier_invalidate_range_start(&range);
 			}
 			migrate_vma_insert_page(migrate, addr, newpage,
 						&migrate->src[i],
@@ -2767,8 +2770,7 @@ static void migrate_vma_pages(struct migrate_vma *migrate)
 	 * did already call it.
 	 */
 	if (notified)
-		mmu_notifier_invalidate_range_only_end(mm, mmu_start,
-						       migrate->end);
+		mmu_notifier_invalidate_range_only_end(&range);
 }
 
 /*
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 5f6665ae3ee2..4c52b3514c50 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -174,28 +174,20 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
 	srcu_read_unlock(&srcu, id);
 }
 
-int __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
-				  unsigned long start, unsigned long end,
-				  bool blockable)
+int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
-	struct mmu_notifier_range _range, *range = &_range;
 	struct mmu_notifier *mn;
 	int ret = 0;
 	int id;
 
-	range->blockable = blockable;
-	range->start = start;
-	range->end = end;
-	range->mm = mm;
-
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
+	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
 		if (mn->ops->invalidate_range_start) {
 			int _ret = mn->ops->invalidate_range_start(mn, range);
 			if (_ret) {
 				pr_info("%pS callback failed with %d in %sblockable context.\n",
 						mn->ops->invalidate_range_start, _ret,
-						!blockable ? "non-" : "");
+						!range->blockable ? "non-" : "");
 				ret = _ret;
 			}
 		}
@@ -206,27 +198,14 @@ int __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);
 
-void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
-					 unsigned long start,
-					 unsigned long end,
+void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
 					 bool only_end)
 {
-	struct mmu_notifier_range _range, *range = &_range;
 	struct mmu_notifier *mn;
 	int id;
 
-	/*
-	 * The end call back will never be call if the start refused to go
-	 * through because of blockable was false so here assume that we
-	 * can block.
-	 */
-	range->blockable = true;
-	range->start = start;
-	range->end = end;
-	range->mm = mm;
-
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
+	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
 		/*
 		 * Call invalidate_range here too to avoid the need for the
 		 * subsystem of having to register an invalidate_range_end
@@ -241,7 +220,9 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
 		 * already happen under page table lock.
 		 */
 		if (!only_end && mn->ops->invalidate_range)
-			mn->ops->invalidate_range(mn, mm, start, end);
+			mn->ops->invalidate_range(mn, range->mm,
+						  range->start,
+						  range->end);
 		if (mn->ops->invalidate_range_end)
 			mn->ops->invalidate_range_end(mn, range);
 	}
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 6d331620b9e5..f466adf31e12 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -171,7 +171,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 	unsigned long next;
 	unsigned long pages = 0;
 	unsigned long nr_huge_updates = 0;
-	unsigned long mni_start = 0;
+	struct mmu_notifier_range range;
+
+	range.start = 0;
 
 	pmd = pmd_offset(pud, addr);
 	do {
@@ -183,9 +185,11 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			goto next;
 
 		/* invoke the mmu notifier if the pmd is populated */
-		if (!mni_start) {
-			mni_start = addr;
-			mmu_notifier_invalidate_range_start(mm, mni_start, end);
+		if (!range.start) {
+			range.start = addr;
+			range.end = end;
+			range.mm = mm;
+			mmu_notifier_invalidate_range_start(&range);
 		}
 
 		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
@@ -214,8 +218,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		cond_resched();
 	} while (pmd++, addr = next, addr != end);
 
-	if (mni_start)
-		mmu_notifier_invalidate_range_end(mm, mni_start, end);
+	if (range.start)
+		mmu_notifier_invalidate_range_end(&range);
 
 	if (nr_huge_updates)
 		count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
diff --git a/mm/mremap.c b/mm/mremap.c
index 7f9f9180e401..db060acb4a8c 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -197,16 +197,17 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		bool need_rmap_locks)
 {
 	unsigned long extent, next, old_end;
+	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
 
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
-	mmun_start = old_addr;
-	mmun_end   = old_end;
-	mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);
+	range.start = old_addr;
+	range.end = old_end;
+	range.mm = vma->vm_mm;
+
+	mmu_notifier_invalidate_range_start(&range);
 
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
@@ -247,7 +248,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			  new_pmd, new_addr, need_rmap_locks);
 	}
 
-	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+	mmu_notifier_invalidate_range_end(&range);
 
 	return len + old_addr - old_end;	/* how much done */
 }
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 6589f60d5018..b29ab2624e95 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -516,19 +516,22 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 		 * count elevated without a good reason.
 		 */
 		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
-			const unsigned long start = vma->vm_start;
-			const unsigned long end = vma->vm_end;
+			struct mmu_notifier_range range;
 			struct mmu_gather tlb;
 
-			tlb_gather_mmu(&tlb, mm, start, end);
-			if (mmu_notifier_invalidate_range_start_nonblock(mm, start, end)) {
-				tlb_finish_mmu(&tlb, start, end);
+			range.start = vma->vm_start;
+			range.end = vma->vm_end;
+			range.mm = mm;
+
+			tlb_gather_mmu(&tlb, mm, range.start, range.end);
+			if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
+				tlb_finish_mmu(&tlb, range.start, range.end);
 				ret = false;
 				continue;
 			}
-			unmap_page_range(&tlb, vma, start, end, NULL);
-			mmu_notifier_invalidate_range_end(mm, start, end);
-			tlb_finish_mmu(&tlb, start, end);
+			unmap_page_range(&tlb, vma, range.start, range.end, NULL);
+			mmu_notifier_invalidate_range_end(&range);
+			tlb_finish_mmu(&tlb, range.start, range.end);
 		}
 	}
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 85b7f9423352..09c5d9e5c766 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -889,15 +889,18 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 		.address = address,
 		.flags = PVMW_SYNC,
 	};
-	unsigned long start = address, end;
+	struct mmu_notifier_range range;
 	int *cleaned = arg;
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the page can not be free from this function.
 	 */
-	end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
-	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
+	range.mm = vma->vm_mm;
+	range.start = address;
+	range.end = min(vma->vm_end, range.start +
+			(PAGE_SIZE << compound_order(page)));
+	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		unsigned long cstart;
@@ -949,7 +952,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			(*cleaned)++;
 	}
 
-	mmu_notifier_invalidate_range_end(vma->vm_mm, start, end);
+	mmu_notifier_invalidate_range_end(&range);
 
 	return true;
 }
@@ -1345,7 +1348,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	pte_t pteval;
 	struct page *subpage;
 	bool ret = true;
-	unsigned long start = address, end;
+	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)arg;
 
 	/* munlock has nothing to gain from examining un-locked vmas */
@@ -1369,15 +1372,19 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
+	range.mm = vma->vm_mm;
+	range.start = vma->vm_start;
+	range.end = min(vma->vm_end, range.start +
+			(PAGE_SIZE << compound_order(page)));
 	if (PageHuge(page)) {
 		/*
 		 * If sharing is possible, start and end will be adjusted
 		 * accordingly.
 		 */
-		adjust_range_if_pmd_sharing_possible(vma, &start, &end);
+		adjust_range_if_pmd_sharing_possible(vma, &range.start,
+						     &range.end);
 	}
-	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
+	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
@@ -1428,9 +1435,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 * we must flush them all.  start/end were
 				 * already adjusted above to cover this range.
 				 */
-				flush_cache_range(vma, start, end);
-				flush_tlb_range(vma, start, end);
-				mmu_notifier_invalidate_range(mm, start, end);
+				flush_cache_range(vma, range.start, range.end);
+				flush_tlb_range(vma, range.start, range.end);
+				mmu_notifier_invalidate_range(mm, range.start,
+							      range.end);
 
 				/*
 				 * The ref count of the PMD page was dropped
@@ -1650,7 +1658,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		put_page(page);
 	}
 
-	mmu_notifier_invalidate_range_end(vma->vm_mm, start, end);
+	mmu_notifier_invalidate_range_end(&range);
 
 	return ret;
 }
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-03 20:18 [PATCH 0/3] mmu notifier contextual informations jglisse
  2018-12-03 20:18 ` [PATCH 1/3] mm/mmu_notifier: use structure for invalidate_range_start/end callback jglisse
  2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
@ 2018-12-03 20:18 ` jglisse
  2018-12-04  8:17   ` Mike Rapoport
                     ` (3 more replies)
  2018-12-04  7:35 ` [PATCH 0/3] mmu notifier contextual informations Koenig, Christian
  3 siblings, 4 replies; 17+ messages in thread
From: jglisse @ 2018-12-03 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, linux-kernel, Jérôme Glisse,
	Matthew Wilcox, Ross Zwisler, Jan Kara, Dan Williams,
	Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

From: Jérôme Glisse <jglisse@redhat.com>

CPU page table updates can happen for many reasons, not only as a result
of a syscall (munmap(), mprotect(), mremap(), madvise(), ...) but also
as a result of kernel activities (memory compaction, reclaim, migration,
...).

Users of the mmu notifier API track changes to the CPU page table and
take specific actions for them. The current API only provides the range
of virtual addresses affected by the change, not why the change is
happening.

This patchset adds event information so that users of mmu notifiers can
differentiate among broad categories:
    - UNMAP: munmap() or mremap()
    - CLEAR: page table is cleared (migration, compaction, reclaim, ...)
    - PROTECTION_VMA: change in access protections for the range
    - PROTECTION_PAGE: change in access protections for pages in the range
    - SOFT_DIRTY: soft dirtiness tracking

Being able to distinguish munmap() and mremap() from the other reasons
why the page table is cleared is important to allow users of mmu
notifiers to update their own internal tracking structures accordingly
(on munmap or mremap it is no longer necessary to track the range of
virtual addresses as it becomes invalid).
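
As an illustration only (my_mirror and its helpers below are hypothetical,
not part of this patch), a driver mirroring the address space could then
filter invalidations in its start callback along these lines:

static int my_mirror_invalidate_start(struct mmu_notifier *mn,
				      struct mmu_notifier_range *range)
{
	struct my_mirror *mirror = container_of(mn, struct my_mirror, mn);

	switch (range->event) {
	case MMU_NOTIFY_SOFT_DIRTY:
		/* Same pages, same access flags: nothing to mirror. */
		return 0;
	case MMU_NOTIFY_UNMAP:
		/*
		 * The range is gone for good: free the device page table
		 * and the tracking structure for it.
		 */
		my_mirror_free_range(mirror, range->start, range->end);
		return 0;
	default:
		/*
		 * CLEAR / PROTECTION_*: invalidate the device page table
		 * but keep the tracking structure for the range.
		 */
		my_mirror_invalidate(mirror, range->start, range->end);
		return 0;
	}
}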

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: kvm@vger.kernel.org
Cc: linux-rdma@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
---
 fs/dax.c                     |  1 +
 fs/proc/task_mmu.c           |  1 +
 include/linux/mmu_notifier.h | 33 +++++++++++++++++++++++++++++++++
 kernel/events/uprobes.c      |  1 +
 mm/huge_memory.c             |  4 ++++
 mm/hugetlb.c                 |  4 ++++
 mm/khugepaged.c              |  1 +
 mm/ksm.c                     |  2 ++
 mm/madvise.c                 |  1 +
 mm/memory.c                  |  5 +++++
 mm/migrate.c                 |  2 ++
 mm/mprotect.c                |  1 +
 mm/mremap.c                  |  1 +
 mm/oom_kill.c                |  1 +
 mm/rmap.c                    |  2 ++
 15 files changed, 60 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index e22508ee19ec..83092c5ac5f0 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -761,6 +761,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 		struct mmu_notifier_range range;
 		unsigned long address;
 
+		range.event = MMU_NOTIFY_PROTECTION_PAGE;
 		range.mm = vma->vm_mm;
 
 		cond_resched();
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 53d625925669..4abb1668eeb3 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1144,6 +1144,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			range.start = 0;
 			range.end = -1UL;
 			range.mm = mm;
+			range.event = MMU_NOTIFY_SOFT_DIRTY;
 			mmu_notifier_invalidate_range_start(&range);
 		}
 		walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index cbeece8e47d4..3077d487be8b 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -25,10 +25,43 @@ struct mmu_notifier_mm {
 	spinlock_t lock;
 };
 
+/*
+ * What event is triggering the invalidation:
+ *
+ * MMU_NOTIFY_UNMAP
+ *    either munmap() that unmap the range or a mremap() that move the range
+ *
+ * MMU_NOTIFY_CLEAR
+ *    clear page table entry (many reasons for this like madvise() or replacing
+ *    a page by another one, ...).
+ *
+ * MMU_NOTIFY_PROTECTION_VMA
+ *    update is due to protection change for the range ie using the vma access
+ *    permission (vm_page_prot) to update the whole range is enough no need to
+ *    inspect changes to the CPU page table (mprotect() syscall)
+ *
+ * MMU_NOTIFY_PROTECTION_PAGE
+ *    update is due to change in read/write flag for pages in the range so to
+ *    mirror those changes the user must inspect the CPU page table (from the
+ *    end callback).
+ *
+ *
+ * MMU_NOTIFY_SOFT_DIRTY
+ *    soft dirty accounting (still same page and same access flags)
+ */
+enum mmu_notifier_event {
+	MMU_NOTIFY_UNMAP = 0,
+	MMU_NOTIFY_CLEAR,
+	MMU_NOTIFY_PROTECTION_VMA,
+	MMU_NOTIFY_PROTECTION_PAGE,
+	MMU_NOTIFY_SOFT_DIRTY,
+};
+
 struct mmu_notifier_range {
 	struct mm_struct *mm;
 	unsigned long start;
 	unsigned long end;
+	enum mmu_notifier_event event;
 	bool blockable;
 };
 
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index aa7996ca361e..b6ef3be1c24e 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -174,6 +174,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	struct mmu_notifier_range range;
 	struct mem_cgroup *memcg;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = addr;
 	range.end = addr + PAGE_SIZE;
 	range.mm = mm;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1a7a059dbf7d..4919be71ffd0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1182,6 +1182,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		cond_resched();
 	}
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = haddr;
 	range.end = range.start + HPAGE_PMD_SIZE;
 	range.mm = vma->vm_mm;
@@ -1347,6 +1348,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 				    vma, HPAGE_PMD_NR);
 	__SetPageUptodate(new_page);
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = haddr;
 	range.end = range.start + HPAGE_PMD_SIZE;
 	range.mm = vma->vm_mm;
@@ -2029,6 +2031,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 	struct mm_struct *mm = vma->vm_mm;
 	struct mmu_notifier_range range;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = address & HPAGE_PUD_MASK;
 	range.end = range.start + HPAGE_PUD_SIZE;
 	range.mm = mm;
@@ -2248,6 +2251,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	struct mm_struct *mm = vma->vm_mm;
 	struct mmu_notifier_range range;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = address & HPAGE_PMD_MASK;
 	range.end = range.start + HPAGE_PMD_SIZE;
 	range.mm = mm;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4bfbdab44d51..9ffe34173834 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3244,6 +3244,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 
 	cow = (vma->vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = vma->vm_start;
 	range.end = vma->vm_end;
 	range.mm = src;
@@ -3344,6 +3345,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	unsigned long sz = huge_page_size(h);
 	struct mmu_notifier_range range;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = start;
 	range.end = end;
 	range.mm = mm;
@@ -3629,6 +3631,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	__SetPageUptodate(new_page);
 	set_page_huge_active(new_page);
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = haddr;
 	range.end = range.start + huge_page_size(h);
 	range.mm = mm;
@@ -4346,6 +4349,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	bool shared_pmd = false;
 	struct mmu_notifier_range range;
 
+	range.event = MMU_NOTIFY_PROTECTION_VMA;
 	range.start = start;
 	range.end = end;
 	range.mm = mm;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e9fe0c9a9f56..c5c78ba30b38 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1016,6 +1016,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	pte = pte_offset_map(pmd, address);
 	pte_ptl = pte_lockptr(mm, pmd);
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = address;
 	range.end = range.start + HPAGE_PMD_SIZE;
 	range.mm = mm;
diff --git a/mm/ksm.c b/mm/ksm.c
index 262694d0cd4c..f8fbb92ca1bd 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1050,6 +1050,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 
 	BUG_ON(PageTransCompound(page));
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = pvmw.address;
 	range.end = range.start + PAGE_SIZE;
 	range.mm = mm;
@@ -1139,6 +1140,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	if (!pmd)
 		goto out;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = addr;
 	range.end = addr + PAGE_SIZE;
 	range.mm = mm;
diff --git a/mm/madvise.c b/mm/madvise.c
index f20dd80ca21b..c415985d6a04 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -466,6 +466,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 	if (!vma_is_anonymous(vma))
 		return -EINVAL;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = max(vma->vm_start, start_addr);
 	if (range.start >= vma->vm_end)
 		return -EINVAL;
diff --git a/mm/memory.c b/mm/memory.c
index 36e0b83949fc..4ad63002d770 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1007,6 +1007,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * is_cow_mapping() returns true.
 	 */
 	is_cow = is_cow_mapping(vma->vm_flags);
+	range.event = MMU_NOTIFY_PROTECTION_PAGE;
 	range.start = addr;
 	range.end = end;
 	range.mm = src_mm;
@@ -1334,6 +1335,7 @@ void unmap_vmas(struct mmu_gather *tlb,
 {
 	struct mmu_notifier_range range;
 
+	range.event = MMU_NOTIFY_UNMAP;
 	range.start = start_addr;
 	range.end = end_addr;
 	range.mm = vma->vm_mm;
@@ -1358,6 +1360,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = start;
 	range.end = range.start + size;
 	range.mm = vma->vm_mm;
@@ -1387,6 +1390,7 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
 	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = address;
 	range.end = range.start + size;
 	range.mm = vma->vm_mm;
@@ -2260,6 +2264,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	struct mem_cgroup *memcg;
 	struct mmu_notifier_range range;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = vmf->address & PAGE_MASK;
 	range.end = range.start + PAGE_SIZE;
 	range.mm = mm;
diff --git a/mm/migrate.c b/mm/migrate.c
index 4896dd9d8b28..a2caaabfc5a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2306,6 +2306,7 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
 	struct mmu_notifier_range range;
 	struct mm_walk mm_walk;
 
+	range.event = MMU_NOTIFY_CLEAR;
 	range.start = migrate->start;
 	range.end = migrate->end;
 	range.mm = mm_walk.mm;
@@ -2726,6 +2727,7 @@ static void migrate_vma_pages(struct migrate_vma *migrate)
 			if (!notified) {
 				notified = true;
 
+				range.event = MMU_NOTIFY_CLEAR;
 				range.start = addr;
 				range.end = migrate->end;
 				range.mm = mm;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index f466adf31e12..6d41321b2f3e 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -186,6 +186,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 
 		/* invoke the mmu notifier if the pmd is populated */
 		if (!range.start) {
+			range.event = MMU_NOTIFY_PROTECTION_VMA;
 			range.start = addr;
 			range.end = end;
 			range.mm = mm;
diff --git a/mm/mremap.c b/mm/mremap.c
index db060acb4a8c..856a5e6bb226 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -203,6 +203,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
+	range.event = MMU_NOTIFY_UNMAP;
 	range.start = old_addr;
 	range.end = old_end;
 	range.mm = vma->vm_mm;
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index b29ab2624e95..f4bde1c34714 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -519,6 +519,7 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 			struct mmu_notifier_range range;
 			struct mmu_gather tlb;
 
+			range.event = MMU_NOTIFY_CLEAR;
 			range.start = vma->vm_start;
 			range.end = vma->vm_end;
 			range.mm = mm;
diff --git a/mm/rmap.c b/mm/rmap.c
index 09c5d9e5c766..b1afbbcc236a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -896,6 +896,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the page can not be free from this function.
 	 */
+	range.event = MMU_NOTIFY_PROTECTION_PAGE;
 	range.mm = vma->vm_mm;
 	range.start = address;
 	range.end = min(vma->vm_end, range.start +
@@ -1372,6 +1373,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
+	range.event = MMU_NOTIFY_CLEAR;
 	range.mm = vma->vm_mm;
 	range.start = vma->vm_start;
 	range.end = min(vma->vm_end, range.start +
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
@ 2018-12-04  0:09   ` Jerome Glisse
  2018-12-05 11:04   ` Jan Kara
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: Jerome Glisse @ 2018-12-04  0:09 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: linux-kernel, Matthew Wilcox, Ross Zwisler, Jan Kara,
	Dan Williams, Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

On Mon, Dec 03, 2018 at 03:18:16PM -0500, jglisse@redhat.com wrote:
> diff --git a/mm/migrate.c b/mm/migrate.c
> index f7e4bfdc13b7..4896dd9d8b28 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2303,8 +2303,13 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>   */
>  static void migrate_vma_collect(struct migrate_vma *migrate)
>  {
> +	struct mmu_notifier_range range;
>  	struct mm_walk mm_walk;
>  
> +	range.start = migrate->start;
> +	range.end = migrate->end;
> +	range.mm = mm_walk.mm;

Andrew, can you replace the above line with:

+	range.mm = migrate->vma->vm_mm;

I made a mistake here when I was rebasing before posting. I checked the
patchset again and I believe this is the only mistake I made.
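
The quoted hunk reads mm_walk.mm before mm_walk is initialized a few lines
further down, so range.mm ends up with an uninitialized value. With the fix
squashed in, the start of migrate_vma_collect() would look like this:

	struct mmu_notifier_range range;
	struct mm_walk mm_walk;

	range.start = migrate->start;
	range.end = migrate->end;
	/* mm_walk.mm is not set up yet at this point. */
	range.mm = migrate->vma->vm_mm;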

Do you want me to repost?

Sorry for my stupid mistake.

Cheers,
Jérôme

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] mmu notifier contextual informations
  2018-12-03 20:18 [PATCH 0/3] mmu notifier contextual informations jglisse
                   ` (2 preceding siblings ...)
  2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
@ 2018-12-04  7:35 ` Koenig, Christian
  3 siblings, 0 replies; 17+ messages in thread
From: Koenig, Christian @ 2018-12-04  7:35 UTC (permalink / raw)
  To: jglisse, linux-mm
  Cc: Andrew Morton, linux-kernel, Matthew Wilcox, Ross Zwisler,
	Jan Kara, Dan Williams, Paolo Bonzini,
	Radim Krčmář,
	Michal Hocko, Kuehling, Felix, Ralph Campbell, John Hubbard, kvm,
	linux-rdma, linux-fsdevel, dri-devel

Am 03.12.18 um 21:18 schrieb jglisse@redhat.com:
> From: Jérôme Glisse <jglisse@redhat.com>
>
> This patchset add contextual information, why an invalidation is
> happening, to mmu notifier callback. This is necessary for user
> of mmu notifier that wish to maintains their own data structure
> without having to add new fields to struct vm_area_struct (vma).
>
> For instance device can have they own page table that mirror the
> process address space. When a vma is unmap (munmap() syscall) the
> device driver can free the device page table for the range.
>
> Today we do not have any information on why a mmu notifier call
> back is happening and thus device driver have to assume that it
> is always an munmap(). This is inefficient at it means that it
> needs to re-allocate device page table on next page fault and
> rebuild the whole device driver data structure for the range.
>
> Other use case beside munmap() also exist, for instance it is
> pointless for device driver to invalidate the device page table
> when the invalidation is for the soft dirtyness tracking. Or
> device driver can optimize away mprotect() that change the page
> table permission access for the range.
>
> This patchset enable all this optimizations for device driver.
> I do not include any of those in this serie but other patchset
> i am posting will leverage this.
>
>
>  From code point of view the patchset is pretty simple, the first
> two patches consolidate all mmu notifier arguments into a struct
> so that it is easier to add/change arguments. The last patch adds
> the contextual information (munmap, protection, soft dirty, clear,
> ...).

Skimming over it, at least the parts I'm familiar with look completely
sane to me.

Whole series is Acked-by: Christian König <christian.koenig@amd.com>.

Regards,
Christian.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
@ 2018-12-04  8:17   ` Mike Rapoport
  2018-12-04 14:48     ` Jerome Glisse
  2018-12-04 23:21   ` Andrew Morton
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Mike Rapoport @ 2018-12-04  8:17 UTC (permalink / raw)
  To: jglisse
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Ross Zwisler, Jan Kara, Dan Williams, Paolo Bonzini,
	Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

On Mon, Dec 03, 2018 at 03:18:17PM -0500, jglisse@redhat.com wrote:
> From: Jérôme Glisse <jglisse@redhat.com>

[...]

> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index cbeece8e47d4..3077d487be8b 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -25,10 +25,43 @@ struct mmu_notifier_mm {
>  	spinlock_t lock;
>  };
> 
> +/*
> + * What event is triggering the invalidation:

Can you please make it a kernel-doc comment?

> + *
> + * MMU_NOTIFY_UNMAP
> + *    either munmap() that unmap the range or a mremap() that move the range
> + *
> + * MMU_NOTIFY_CLEAR
> + *    clear page table entry (many reasons for this like madvise() or replacing
> + *    a page by another one, ...).
> + *
> + * MMU_NOTIFY_PROTECTION_VMA
> + *    update is due to protection change for the range ie using the vma access
> + *    permission (vm_page_prot) to update the whole range is enough no need to
> + *    inspect changes to the CPU page table (mprotect() syscall)
> + *
> + * MMU_NOTIFY_PROTECTION_PAGE
> + *    update is due to change in read/write flag for pages in the range so to
> + *    mirror those changes the user must inspect the CPU page table (from the
> + *    end callback).
> + *
> + *
> + * MMU_NOTIFY_SOFT_DIRTY
> + *    soft dirty accounting (still same page and same access flags)
> + */
> +enum mmu_notifier_event {
> +	MMU_NOTIFY_UNMAP = 0,
> +	MMU_NOTIFY_CLEAR,
> +	MMU_NOTIFY_PROTECTION_VMA,
> +	MMU_NOTIFY_PROTECTION_PAGE,
> +	MMU_NOTIFY_SOFT_DIRTY,
> +};
> +
>  struct mmu_notifier_range {
>  	struct mm_struct *mm;
>  	unsigned long start;
>  	unsigned long end;
> +	enum mmu_notifier_event event;
>  	bool blockable;
>  };

[...]

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-04  8:17   ` Mike Rapoport
@ 2018-12-04 14:48     ` Jerome Glisse
  0 siblings, 0 replies; 17+ messages in thread
From: Jerome Glisse @ 2018-12-04 14:48 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Ross Zwisler, Jan Kara, Dan Williams, Paolo Bonzini,
	Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

On Tue, Dec 04, 2018 at 10:17:48AM +0200, Mike Rapoport wrote:
> On Mon, Dec 03, 2018 at 03:18:17PM -0500, jglisse@redhat.com wrote:
> > From: Jérôme Glisse <jglisse@redhat.com>

[...]

> > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> > index cbeece8e47d4..3077d487be8b 100644
> > --- a/include/linux/mmu_notifier.h
> > +++ b/include/linux/mmu_notifier.h
> > @@ -25,10 +25,43 @@ struct mmu_notifier_mm {
> >  	spinlock_t lock;
> >  };
> > 
> > +/*
> > + * What event is triggering the invalidation:
> 
> Can you please make it kernel-doc comment?

Sorry, I should have done that in the first place. Andrew, I will post a
v2 with that (a rough sketch of the kernel-doc version is below) and with
my one stupid bug fixed.



> > + *
> > + * MMU_NOTIFY_UNMAP
> > + *    either munmap() that unmap the range or a mremap() that move the range
> > + *
> > + * MMU_NOTIFY_CLEAR
> > + *    clear page table entry (many reasons for this like madvise() or replacing
> > + *    a page by another one, ...).
> > + *
> > + * MMU_NOTIFY_PROTECTION_VMA
> > + *    update is due to protection change for the range ie using the vma access
> > + *    permission (vm_page_prot) to update the whole range is enough no need to
> > + *    inspect changes to the CPU page table (mprotect() syscall)
> > + *
> > + * MMU_NOTIFY_PROTECTION_PAGE
> > + *    update is due to change in read/write flag for pages in the range so to
> > + *    mirror those changes the user must inspect the CPU page table (from the
> > + *    end callback).
> > + *
> > + *
> > + * MMU_NOTIFY_SOFT_DIRTY
> > + *    soft dirty accounting (still same page and same access flags)
> > + */
> > +enum mmu_notifier_event {
> > +	MMU_NOTIFY_UNMAP = 0,
> > +	MMU_NOTIFY_CLEAR,
> > +	MMU_NOTIFY_PROTECTION_VMA,
> > +	MMU_NOTIFY_PROTECTION_PAGE,
> > +	MMU_NOTIFY_SOFT_DIRTY,
> > +};
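
For reference, the kernel-doc version I have in mind for v2 would look
roughly like this (wording not final):

/**
 * enum mmu_notifier_event - event that is triggering the invalidation
 * @MMU_NOTIFY_UNMAP: either munmap() unmapping the range or mremap()
 *	moving it
 * @MMU_NOTIFY_CLEAR: page table entries are cleared (madvise(), page
 *	replacement, ...)
 * @MMU_NOTIFY_PROTECTION_VMA: protection change for the whole range, the
 *	vma access permission (vm_page_prot) is enough to update the whole
 *	range (mprotect() syscall)
 * @MMU_NOTIFY_PROTECTION_PAGE: read/write flag change for pages in the
 *	range, the CPU page table must be inspected from the end callback
 * @MMU_NOTIFY_SOFT_DIRTY: soft dirty accounting (same page, same access
 *	flags)
 */
enum mmu_notifier_event {
	MMU_NOTIFY_UNMAP = 0,
	MMU_NOTIFY_CLEAR,
	MMU_NOTIFY_PROTECTION_VMA,
	MMU_NOTIFY_PROTECTION_PAGE,
	MMU_NOTIFY_SOFT_DIRTY,
};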

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
  2018-12-04  8:17   ` Mike Rapoport
@ 2018-12-04 23:21   ` Andrew Morton
  2018-12-06 20:53   ` kbuild test robot
  2018-12-06 21:19   ` kbuild test robot
  3 siblings, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2018-12-04 23:21 UTC (permalink / raw)
  To: jglisse
  Cc: linux-mm, linux-kernel, Matthew Wilcox, Ross Zwisler, Jan Kara,
	Dan Williams, Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

On Mon,  3 Dec 2018 15:18:17 -0500 jglisse@redhat.com wrote:

> CPU page table updates can happen for many reasons, not only as a result
> of a syscall (munmap(), mprotect(), mremap(), madvise(), ...) but also
> as a result of kernel activities (memory compaction, reclaim, migration,
> ...).
> 
> Users of the mmu notifier API track changes to the CPU page table and
> take specific actions for them. The current API only provides the range
> of virtual addresses affected by the change, not why the change is
> happening.
> 
> This patchset adds event information so that users of mmu notifiers can
> differentiate among broad categories:
>     - UNMAP: munmap() or mremap()
>     - CLEAR: page table is cleared (migration, compaction, reclaim, ...)
>     - PROTECTION_VMA: change in access protections for the range
>     - PROTECTION_PAGE: change in access protections for pages in the range
>     - SOFT_DIRTY: soft dirtiness tracking
> 
> Being able to distinguish munmap() and mremap() from the other reasons
> why the page table is cleared is important to allow users of mmu
> notifiers to update their own internal tracking structures accordingly
> (on munmap or mremap it is no longer necessary to track the range of
> virtual addresses as it becomes invalid).
> 
> ...
>
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -519,6 +519,7 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
>  			struct mmu_notifier_range range;
>  			struct mmu_gather tlb;
>  
> +			range.event = MMU_NOTIFY_CLEAR;
>  			range.start = vma->vm_start;
>  			range.end = vma->vm_end;
>  			range.mm = mm;

mmu_notifier_range and MMU_NOTIFY_CLEAR aren't defined if
CONFIG_MMU_NOTIFIER=n.

I'll try a temporary bodge:

+++ a/include/linux/mmu_notifier.h
@@ -10,8 +10,6 @@
 struct mmu_notifier;
 struct mmu_notifier_ops;
 
-#ifdef CONFIG_MMU_NOTIFIER
-
 /*
  * The mmu notifier_mm structure is allocated and installed in
  * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
@@ -32,6 +30,8 @@ struct mmu_notifier_range {
 	bool blockable;
 };
 
+#ifdef CONFIG_MMU_NOTIFIER
+
 struct mmu_notifier_ops {
 	/*
 	 * Called either by mmu_notifier_unregister or when the mm is


But this new code should vanish altogether if CONFIG_MMU_NOTIFIER=n,
please.  Or at least, we shouldn't be unnecessarily initializing .mm
and .event.  Please take a look at debloating this code.
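
Something like the below would do it (untested sketch, mmu_notifier_range_init()
is just a name I made up, and it assumes the struct stays visible outside the
ifdef as in the bodge above):

#ifdef CONFIG_MMU_NOTIFIER
static inline void mmu_notifier_range_init(struct mmu_notifier_range *range,
					   struct mm_struct *mm,
					   unsigned long start,
					   unsigned long end)
{
	range->mm = mm;
	range->start = start;
	range->end = end;
}
#else
static inline void mmu_notifier_range_init(struct mmu_notifier_range *range,
					   struct mm_struct *mm,
					   unsigned long start,
					   unsigned long end)
{
	/* No notifier users: nothing worth recording. */
}
#endif

Call sites then become mmu_notifier_range_init(&range, mm, vma->vm_start,
vma->vm_end); and with CONFIG_MMU_NOTIFIER=n the stores should simply get
optimized away.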



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
  2018-12-04  0:09   ` Jerome Glisse
@ 2018-12-05 11:04   ` Jan Kara
  2018-12-05 15:53     ` Jerome Glisse
  2018-12-06 20:31   ` kbuild test robot
  2018-12-06 20:35   ` kbuild test robot
  3 siblings, 1 reply; 17+ messages in thread
From: Jan Kara @ 2018-12-05 11:04 UTC (permalink / raw)
  To: jglisse
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Ross Zwisler, Jan Kara, Dan Williams, Paolo Bonzini,
	Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

Hi Jerome!

On Mon 03-12-18 15:18:16, jglisse@redhat.com wrote:
> From: Jérôme Glisse <jglisse@redhat.com>
> 
> > To avoid having to change many call sites every time we want to add a
> > parameter, use a structure to group all parameters for the mmu_notifier
> > invalidate_range_start/end calls. No functional changes with this
> > patch.

Two suggestions for the patch below:

> @@ -772,7 +775,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
>  		 * call mmu_notifier_invalidate_range_start() on our behalf
>  		 * before taking any lock.
>  		 */
> -		if (follow_pte_pmd(vma->vm_mm, address, &start, &end, &ptep, &pmdp, &ptl))
> +		if (follow_pte_pmd(vma->vm_mm, address, &range,
> +				   &ptep, &pmdp, &ptl))
>  			continue;

The change of follow_pte_pmd() arguments looks unexpected. Why should that
care about mmu notifier range? I see it may be convenient but it doesn't look
like a good API to me.

> @@ -1139,11 +1140,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  				downgrade_write(&mm->mmap_sem);
>  				break;
>  			}
> -			mmu_notifier_invalidate_range_start(mm, 0, -1);
> +
> +			range.start = 0;
> +			range.end = -1UL;
> +			range.mm = mm;
> +			mmu_notifier_invalidate_range_start(&range);

Also, how about providing an initializer for struct mmu_notifier_range? Or
something like DECLARE_MMU_NOTIFIER_RANGE? That will make sure that
arguments unused at a particular notification site still have defined values,
and if you add another mandatory argument (like you do in your third
patch), you just add another argument to the initializer and that way the
compiler makes sure you haven't missed any place. Finally, the code will
remain more compact that way (fewer lines needed to initialize the struct).
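
For instance (just a sketch, macro name made up, fields as in your patch):

#define MMU_NOTIFIER_RANGE_INIT(_mm, _start, _end) \
	{ .mm = (_mm), .start = (_start), .end = (_end) }

and then at a call site:

	struct mmu_notifier_range range = MMU_NOTIFIER_RANGE_INIT(mm, 0, -1UL);

Fields not named in the macro (like blockable) get a defined zero value, and
adding a mandatory argument to the macro makes every call site that was not
updated fail to build.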

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-05 11:04   ` Jan Kara
@ 2018-12-05 15:53     ` Jerome Glisse
  2018-12-05 16:28       ` Jan Kara
  0 siblings, 1 reply; 17+ messages in thread
From: Jerome Glisse @ 2018-12-05 15:53 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Ross Zwisler, Dan Williams, Paolo Bonzini,
	Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

On Wed, Dec 05, 2018 at 12:04:16PM +0100, Jan Kara wrote:
> Hi Jerome!
> 
> On Mon 03-12-18 15:18:16, jglisse@redhat.com wrote:
> > From: Jérôme Glisse <jglisse@redhat.com>
> > 
> > To avoid having to change many call sites every time we want to add a
> > parameter, use a structure to group all parameters for the mmu_notifier
> > invalidate_range_start/end calls. No functional changes with this
> > patch.
> 
> Two suggestions for the patch below:
> 
> > @@ -772,7 +775,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
> >  		 * call mmu_notifier_invalidate_range_start() on our behalf
> >  		 * before taking any lock.
> >  		 */
> > -		if (follow_pte_pmd(vma->vm_mm, address, &start, &end, &ptep, &pmdp, &ptl))
> > +		if (follow_pte_pmd(vma->vm_mm, address, &range,
> > +				   &ptep, &pmdp, &ptl))
> >  			continue;
> 
> The change of follow_pte_pmd() arguments looks unexpected. Why should that
> care about mmu notifier range? I see it may be convenient but it doesn't look
> like a good API to me.

Sadly I do not see a way around that one. This is because fs/dax.c
does the mmu_notifier_invalidate_range_end() while follow_pte_pmd()
does the mmu_notifier_invalidate_range_start().

follow_pte_pmd() adjusts the start and end addresses so that the dax
function does not need the logic to find those addresses. So rather than
duplicating that logic from follow_pte_pmd() inside the dax code, I passed
the range struct down to follow_pte_pmd().
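
Roughly, the dax side ends up looking like this (excerpt-style sketch, not
the full loop):

	struct mmu_notifier_range range;
	...
	/*
	 * follow_pte_pmd() fills range.start/range.end with the pte- or
	 * pmd-sized boundaries and calls
	 * mmu_notifier_invalidate_range_start(&range) on our behalf before
	 * taking any lock.
	 */
	if (follow_pte_pmd(vma->vm_mm, address, &range, &ptep, &pmdp, &ptl))
		continue;

	/* ... write protect / clean the entry under the page table lock ... */

	/* dax has to end what follow_pte_pmd() started. */
	mmu_notifier_invalidate_range_end(&range);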

> 
> > @@ -1139,11 +1140,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> >  				downgrade_write(&mm->mmap_sem);
> >  				break;
> >  			}
> > -			mmu_notifier_invalidate_range_start(mm, 0, -1);
> > +
> > +			range.start = 0;
> > +			range.end = -1UL;
> > +			range.mm = mm;
> > +			mmu_notifier_invalidate_range_start(&range);
> 
> Also, how about providing an initializer for struct mmu_notifier_range? Or
> something like DECLARE_MMU_NOTIFIER_RANGE? That will make sure that
> arguments unused at a particular notification site still have defined values,
> and if you add another mandatory argument (like you do in your third
> patch), you just add another argument to the initializer and that way the
> compiler makes sure you haven't missed any place. Finally, the code will
> remain more compact that way (fewer lines needed to initialize the struct).

That is what I do in v2 :)

Thank you for looking into all this.

Cheers,
Jérôme

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-05 15:53     ` Jerome Glisse
@ 2018-12-05 16:28       ` Jan Kara
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Kara @ 2018-12-05 16:28 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: Jan Kara, linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Ross Zwisler, Dan Williams, Paolo Bonzini,
	Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

On Wed 05-12-18 10:53:57, Jerome Glisse wrote:
> On Wed, Dec 05, 2018 at 12:04:16PM +0100, Jan Kara wrote:
> > Hi Jerome!
> > 
> > On Mon 03-12-18 15:18:16, jglisse@redhat.com wrote:
> > > From: Jérôme Glisse <jglisse@redhat.com>
> > > 
> > > To avoid having to change many call sites every time we want to add a
> > > parameter, use a structure to group all parameters for the mmu_notifier
> > > invalidate_range_start/end calls. No functional changes with this
> > > patch.
> > 
> > Two suggestions for the patch below:
> > 
> > > @@ -772,7 +775,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
> > >  		 * call mmu_notifier_invalidate_range_start() on our behalf
> > >  		 * before taking any lock.
> > >  		 */
> > > -		if (follow_pte_pmd(vma->vm_mm, address, &start, &end, &ptep, &pmdp, &ptl))
> > > +		if (follow_pte_pmd(vma->vm_mm, address, &range,
> > > +				   &ptep, &pmdp, &ptl))
> > >  			continue;
> > 
> > The change of follow_pte_pmd() arguments looks unexpected. Why should that
> > care about mmu notifier range? I see it may be convenient but it doesn't look
> > like a good API to me.
> 
> Sadly I do not see a way around that one. This is because fs/dax.c
> does the mmu_notifier_invalidate_range_end() while follow_pte_pmd()
> does the mmu_notifier_invalidate_range_start().

I see, so this is really a preexisting problem with follow_pte_pmd() having
an ugly interface. After some thought I think your patch actually slightly
improves the situation, so OK.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
  2018-12-04  0:09   ` Jerome Glisse
  2018-12-05 11:04   ` Jan Kara
@ 2018-12-06 20:31   ` kbuild test robot
  2018-12-06 20:35   ` kbuild test robot
  3 siblings, 0 replies; 17+ messages in thread
From: kbuild test robot @ 2018-12-06 20:31 UTC (permalink / raw)
  To: jglisse
  Cc: kbuild-all, linux-mm, Andrew Morton, linux-kernel,
	Jérôme Glisse, Matthew Wilcox, Ross Zwisler, Jan Kara,
	Dan Williams, Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

[-- Attachment #1: Type: text/plain, Size: 3957 bytes --]

Hi Jérôme,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.20-rc5]
[cannot apply to next-20181206]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/jglisse-redhat-com/mmu-notifier-contextual-informations/20181207-031930
config: x86_64-randconfig-x017-201848 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   fs//proc/task_mmu.c: In function 'clear_refs_write':
>> fs//proc/task_mmu.c:1099:29: error: storage size of 'range' isn't known
      struct mmu_notifier_range range;
                                ^~~~~
   fs//proc/task_mmu.c:1099:29: warning: unused variable 'range' [-Wunused-variable]

vim +1099 fs//proc/task_mmu.c

  1069	
  1070	static ssize_t clear_refs_write(struct file *file, const char __user *buf,
  1071					size_t count, loff_t *ppos)
  1072	{
  1073		struct task_struct *task;
  1074		char buffer[PROC_NUMBUF];
  1075		struct mm_struct *mm;
  1076		struct vm_area_struct *vma;
  1077		enum clear_refs_types type;
  1078		struct mmu_gather tlb;
  1079		int itype;
  1080		int rv;
  1081	
  1082		memset(buffer, 0, sizeof(buffer));
  1083		if (count > sizeof(buffer) - 1)
  1084			count = sizeof(buffer) - 1;
  1085		if (copy_from_user(buffer, buf, count))
  1086			return -EFAULT;
  1087		rv = kstrtoint(strstrip(buffer), 10, &itype);
  1088		if (rv < 0)
  1089			return rv;
  1090		type = (enum clear_refs_types)itype;
  1091		if (type < CLEAR_REFS_ALL || type >= CLEAR_REFS_LAST)
  1092			return -EINVAL;
  1093	
  1094		task = get_proc_task(file_inode(file));
  1095		if (!task)
  1096			return -ESRCH;
  1097		mm = get_task_mm(task);
  1098		if (mm) {
> 1099			struct mmu_notifier_range range;
  1100			struct clear_refs_private cp = {
  1101				.type = type,
  1102			};
  1103			struct mm_walk clear_refs_walk = {
  1104				.pmd_entry = clear_refs_pte_range,
  1105				.test_walk = clear_refs_test_walk,
  1106				.mm = mm,
  1107				.private = &cp,
  1108			};
  1109	
  1110			if (type == CLEAR_REFS_MM_HIWATER_RSS) {
  1111				if (down_write_killable(&mm->mmap_sem)) {
  1112					count = -EINTR;
  1113					goto out_mm;
  1114				}
  1115	
  1116				/*
  1117				 * Writing 5 to /proc/pid/clear_refs resets the peak
  1118				 * resident set size to this mm's current rss value.
  1119				 */
  1120				reset_mm_hiwater_rss(mm);
  1121				up_write(&mm->mmap_sem);
  1122				goto out_mm;
  1123			}
  1124	
  1125			down_read(&mm->mmap_sem);
  1126			tlb_gather_mmu(&tlb, mm, 0, -1);
  1127			if (type == CLEAR_REFS_SOFT_DIRTY) {
  1128				for (vma = mm->mmap; vma; vma = vma->vm_next) {
  1129					if (!(vma->vm_flags & VM_SOFTDIRTY))
  1130						continue;
  1131					up_read(&mm->mmap_sem);
  1132					if (down_write_killable(&mm->mmap_sem)) {
  1133						count = -EINTR;
  1134						goto out_mm;
  1135					}
  1136					for (vma = mm->mmap; vma; vma = vma->vm_next) {
  1137						vma->vm_flags &= ~VM_SOFTDIRTY;
  1138						vma_set_page_prot(vma);
  1139					}
  1140					downgrade_write(&mm->mmap_sem);
  1141					break;
  1142				}
  1143	
  1144				range.start = 0;
  1145				range.end = -1UL;
  1146				range.mm = mm;
  1147				mmu_notifier_invalidate_range_start(&range);
  1148			}
  1149			walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
  1150			if (type == CLEAR_REFS_SOFT_DIRTY)
  1151				mmu_notifier_invalidate_range_end(&range);
  1152			tlb_finish_mmu(&tlb, 0, -1);
  1153			up_read(&mm->mmap_sem);
  1154	out_mm:
  1155			mmput(mm);
  1156		}
  1157		put_task_struct(task);
  1158	
  1159		return count;
  1160	}
  1161	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25413 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls
  2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
                     ` (2 preceding siblings ...)
  2018-12-06 20:31   ` kbuild test robot
@ 2018-12-06 20:35   ` kbuild test robot
  3 siblings, 0 replies; 17+ messages in thread
From: kbuild test robot @ 2018-12-06 20:35 UTC (permalink / raw)
  To: jglisse
  Cc: kbuild-all, linux-mm, Andrew Morton, linux-kernel,
	Jérôme Glisse, Matthew Wilcox, Ross Zwisler, Jan Kara,
	Dan Williams, Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, dri-devel, linux-rdma, linux-fsdevel

[-- Attachment #1: Type: text/plain, Size: 3674 bytes --]

Hi Jérôme,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.20-rc5]
[cannot apply to next-20181206]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/jglisse-redhat-com/mmu-notifier-contextual-informations/20181207-031930
config: i386-randconfig-x007-201848 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   kernel/events/uprobes.c: In function '__replace_page':
>> kernel/events/uprobes.c:174:28: error: storage size of 'range' isn't known
     struct mmu_notifier_range range;
                               ^~~~~
   kernel/events/uprobes.c:174:28: warning: unused variable 'range' [-Wunused-variable]

vim +174 kernel/events/uprobes.c

   152	
   153	/**
   154	 * __replace_page - replace page in vma by new page.
   155	 * based on replace_page in mm/ksm.c
   156	 *
   157	 * @vma:      vma that holds the pte pointing to page
   158	 * @addr:     address the old @page is mapped at
   159	 * @page:     the cowed page we are replacing by kpage
   160	 * @kpage:    the modified page we replace page by
   161	 *
   162	 * Returns 0 on success, -EFAULT on failure.
   163	 */
   164	static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
   165					struct page *old_page, struct page *new_page)
   166	{
   167		struct mm_struct *mm = vma->vm_mm;
   168		struct page_vma_mapped_walk pvmw = {
   169			.page = old_page,
   170			.vma = vma,
   171			.address = addr,
   172		};
   173		int err;
 > 174		struct mmu_notifier_range range;
   175		struct mem_cgroup *memcg;
   176	
   177		range.start = addr;
   178		range.end = addr + PAGE_SIZE;
   179		range.mm = mm;
   180	
   181		VM_BUG_ON_PAGE(PageTransHuge(old_page), old_page);
   182	
   183		err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, &memcg,
   184				false);
   185		if (err)
   186			return err;
   187	
   188		/* For try_to_free_swap() and munlock_vma_page() below */
   189		lock_page(old_page);
   190	
   191		mmu_notifier_invalidate_range_start(&range);
   192		err = -EAGAIN;
   193		if (!page_vma_mapped_walk(&pvmw)) {
   194			mem_cgroup_cancel_charge(new_page, memcg, false);
   195			goto unlock;
   196		}
   197		VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
   198	
   199		get_page(new_page);
   200		page_add_new_anon_rmap(new_page, vma, addr, false);
   201		mem_cgroup_commit_charge(new_page, memcg, false, false);
   202		lru_cache_add_active_or_unevictable(new_page, vma);
   203	
   204		if (!PageAnon(old_page)) {
   205			dec_mm_counter(mm, mm_counter_file(old_page));
   206			inc_mm_counter(mm, MM_ANONPAGES);
   207		}
   208	
   209		flush_cache_page(vma, addr, pte_pfn(*pvmw.pte));
   210		ptep_clear_flush_notify(vma, addr, pvmw.pte);
   211		set_pte_at_notify(mm, addr, pvmw.pte,
   212				mk_pte(new_page, vma->vm_page_prot));
   213	
   214		page_remove_rmap(old_page, false);
   215		if (!page_mapped(old_page))
   216			try_to_free_swap(old_page);
   217		page_vma_mapped_walk_done(&pvmw);
   218	
   219		if (vma->vm_flags & VM_LOCKED)
   220			munlock_vma_page(old_page);
   221		put_page(old_page);
   222	
   223		err = 0;
   224	 unlock:
   225		mmu_notifier_invalidate_range_end(&range);
   226		unlock_page(old_page);
   227		return err;
   228	}
   229	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 23952 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
  2018-12-04  8:17   ` Mike Rapoport
  2018-12-04 23:21   ` Andrew Morton
@ 2018-12-06 20:53   ` kbuild test robot
  2018-12-06 21:19   ` kbuild test robot
  3 siblings, 0 replies; 17+ messages in thread
From: kbuild test robot @ 2018-12-06 20:53 UTC (permalink / raw)
  To: jglisse
  Cc: kbuild-all, linux-mm, Andrew Morton, linux-kernel,
	Jérôme Glisse, Matthew Wilcox, Ross Zwisler, Jan Kara,
	Dan Williams, Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

[-- Attachment #1: Type: text/plain, Size: 4149 bytes --]

Hi Jérôme,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.20-rc5]
[cannot apply to next-20181206]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/jglisse-redhat-com/mmu-notifier-contextual-informations/20181207-031930
config: i386-randconfig-x007-201848 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   kernel/events/uprobes.c: In function '__replace_page':
   kernel/events/uprobes.c:174:28: error: storage size of 'range' isn't known
     struct mmu_notifier_range range;
                               ^~~~~
>> kernel/events/uprobes.c:177:16: error: 'MMU_NOTIFY_CLEAR' undeclared (first use in this function); did you mean 'VM_ARCH_CLEAR'?
     range.event = MMU_NOTIFY_CLEAR;
                   ^~~~~~~~~~~~~~~~
                   VM_ARCH_CLEAR
   kernel/events/uprobes.c:177:16: note: each undeclared identifier is reported only once for each function it appears in
   kernel/events/uprobes.c:174:28: warning: unused variable 'range' [-Wunused-variable]
     struct mmu_notifier_range range;
                               ^~~~~

vim +177 kernel/events/uprobes.c

   152	
   153	/**
   154	 * __replace_page - replace page in vma by new page.
   155	 * based on replace_page in mm/ksm.c
   156	 *
   157	 * @vma:      vma that holds the pte pointing to page
   158	 * @addr:     address the old @page is mapped at
   159	 * @page:     the cowed page we are replacing by kpage
   160	 * @kpage:    the modified page we replace page by
   161	 *
   162	 * Returns 0 on success, -EFAULT on failure.
   163	 */
   164	static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
   165					struct page *old_page, struct page *new_page)
   166	{
   167		struct mm_struct *mm = vma->vm_mm;
   168		struct page_vma_mapped_walk pvmw = {
   169			.page = old_page,
   170			.vma = vma,
   171			.address = addr,
   172		};
   173		int err;
 > 174		struct mmu_notifier_range range;
   175		struct mem_cgroup *memcg;
   176	
 > 177		range.event = MMU_NOTIFY_CLEAR;
   178		range.start = addr;
   179		range.end = addr + PAGE_SIZE;
   180		range.mm = mm;
   181	
   182		VM_BUG_ON_PAGE(PageTransHuge(old_page), old_page);
   183	
   184		err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, &memcg,
   185				false);
   186		if (err)
   187			return err;
   188	
   189		/* For try_to_free_swap() and munlock_vma_page() below */
   190		lock_page(old_page);
   191	
   192		mmu_notifier_invalidate_range_start(&range);
   193		err = -EAGAIN;
   194		if (!page_vma_mapped_walk(&pvmw)) {
   195			mem_cgroup_cancel_charge(new_page, memcg, false);
   196			goto unlock;
   197		}
   198		VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
   199	
   200		get_page(new_page);
   201		page_add_new_anon_rmap(new_page, vma, addr, false);
   202		mem_cgroup_commit_charge(new_page, memcg, false, false);
   203		lru_cache_add_active_or_unevictable(new_page, vma);
   204	
   205		if (!PageAnon(old_page)) {
   206			dec_mm_counter(mm, mm_counter_file(old_page));
   207			inc_mm_counter(mm, MM_ANONPAGES);
   208		}
   209	
   210		flush_cache_page(vma, addr, pte_pfn(*pvmw.pte));
   211		ptep_clear_flush_notify(vma, addr, pvmw.pte);
   212		set_pte_at_notify(mm, addr, pvmw.pte,
   213				mk_pte(new_page, vma->vm_page_prot));
   214	
   215		page_remove_rmap(old_page, false);
   216		if (!page_mapped(old_page))
   217			try_to_free_swap(old_page);
   218		page_vma_mapped_walk_done(&pvmw);
   219	
   220		if (vma->vm_flags & VM_LOCKED)
   221			munlock_vma_page(old_page);
   222		put_page(old_page);
   223	
   224		err = 0;
   225	 unlock:
   226		mmu_notifier_invalidate_range_end(&range);
   227		unlock_page(old_page);
   228		return err;
   229	}
   230	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 23952 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
                     ` (2 preceding siblings ...)
  2018-12-06 20:53   ` kbuild test robot
@ 2018-12-06 21:19   ` kbuild test robot
  2018-12-06 21:51     ` Jerome Glisse
  3 siblings, 1 reply; 17+ messages in thread
From: kbuild test robot @ 2018-12-06 21:19 UTC (permalink / raw)
  To: jglisse
  Cc: kbuild-all, linux-mm, Andrew Morton, linux-kernel,
	Jérôme Glisse, Matthew Wilcox, Ross Zwisler, Jan Kara,
	Dan Williams, Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

[-- Attachment #1: Type: text/plain, Size: 4478 bytes --]

Hi Jérôme,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.20-rc5]
[cannot apply to next-20181206]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/jglisse-redhat-com/mmu-notifier-contextual-informations/20181207-031930
config: x86_64-randconfig-x017-201848 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   fs///proc/task_mmu.c: In function 'clear_refs_write':
   fs///proc/task_mmu.c:1099:29: error: storage size of 'range' isn't known
      struct mmu_notifier_range range;
                                ^~~~~
>> fs///proc/task_mmu.c:1147:18: error: 'MMU_NOTIFY_SOFT_DIRTY' undeclared (first use in this function); did you mean 'CLEAR_REFS_SOFT_DIRTY'?
       range.event = MMU_NOTIFY_SOFT_DIRTY;
                     ^~~~~~~~~~~~~~~~~~~~~
                     CLEAR_REFS_SOFT_DIRTY
   fs///proc/task_mmu.c:1147:18: note: each undeclared identifier is reported only once for each function it appears in
   fs///proc/task_mmu.c:1099:29: warning: unused variable 'range' [-Wunused-variable]
      struct mmu_notifier_range range;
                                ^~~~~

vim +1147 fs///proc/task_mmu.c

  1069	
  1070	static ssize_t clear_refs_write(struct file *file, const char __user *buf,
  1071					size_t count, loff_t *ppos)
  1072	{
  1073		struct task_struct *task;
  1074		char buffer[PROC_NUMBUF];
  1075		struct mm_struct *mm;
  1076		struct vm_area_struct *vma;
  1077		enum clear_refs_types type;
  1078		struct mmu_gather tlb;
  1079		int itype;
  1080		int rv;
  1081	
  1082		memset(buffer, 0, sizeof(buffer));
  1083		if (count > sizeof(buffer) - 1)
  1084			count = sizeof(buffer) - 1;
  1085		if (copy_from_user(buffer, buf, count))
  1086			return -EFAULT;
  1087		rv = kstrtoint(strstrip(buffer), 10, &itype);
  1088		if (rv < 0)
  1089			return rv;
  1090		type = (enum clear_refs_types)itype;
  1091		if (type < CLEAR_REFS_ALL || type >= CLEAR_REFS_LAST)
  1092			return -EINVAL;
  1093	
  1094		task = get_proc_task(file_inode(file));
  1095		if (!task)
  1096			return -ESRCH;
  1097		mm = get_task_mm(task);
  1098		if (mm) {
> 1099			struct mmu_notifier_range range;
  1100			struct clear_refs_private cp = {
  1101				.type = type,
  1102			};
  1103			struct mm_walk clear_refs_walk = {
  1104				.pmd_entry = clear_refs_pte_range,
  1105				.test_walk = clear_refs_test_walk,
  1106				.mm = mm,
  1107				.private = &cp,
  1108			};
  1109	
  1110			if (type == CLEAR_REFS_MM_HIWATER_RSS) {
  1111				if (down_write_killable(&mm->mmap_sem)) {
  1112					count = -EINTR;
  1113					goto out_mm;
  1114				}
  1115	
  1116				/*
  1117				 * Writing 5 to /proc/pid/clear_refs resets the peak
  1118				 * resident set size to this mm's current rss value.
  1119				 */
  1120				reset_mm_hiwater_rss(mm);
  1121				up_write(&mm->mmap_sem);
  1122				goto out_mm;
  1123			}
  1124	
  1125			down_read(&mm->mmap_sem);
  1126			tlb_gather_mmu(&tlb, mm, 0, -1);
  1127			if (type == CLEAR_REFS_SOFT_DIRTY) {
  1128				for (vma = mm->mmap; vma; vma = vma->vm_next) {
  1129					if (!(vma->vm_flags & VM_SOFTDIRTY))
  1130						continue;
  1131					up_read(&mm->mmap_sem);
  1132					if (down_write_killable(&mm->mmap_sem)) {
  1133						count = -EINTR;
  1134						goto out_mm;
  1135					}
  1136					for (vma = mm->mmap; vma; vma = vma->vm_next) {
  1137						vma->vm_flags &= ~VM_SOFTDIRTY;
  1138						vma_set_page_prot(vma);
  1139					}
  1140					downgrade_write(&mm->mmap_sem);
  1141					break;
  1142				}
  1143	
  1144				range.start = 0;
  1145				range.end = -1UL;
  1146				range.mm = mm;
> 1147				range.event = MMU_NOTIFY_SOFT_DIRTY;
  1148				mmu_notifier_invalidate_range_start(&range);
  1149			}
  1150			walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
  1151			if (type == CLEAR_REFS_SOFT_DIRTY)
  1152				mmu_notifier_invalidate_range_end(&range);
  1153			tlb_finish_mmu(&tlb, 0, -1);
  1154			up_read(&mm->mmap_sem);
  1155	out_mm:
  1156			mmput(mm);
  1157		}
  1158		put_task_struct(task);
  1159	
  1160		return count;
  1161	}
  1162	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25413 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation
  2018-12-06 21:19   ` kbuild test robot
@ 2018-12-06 21:51     ` Jerome Glisse
  0 siblings, 0 replies; 17+ messages in thread
From: Jerome Glisse @ 2018-12-06 21:51 UTC (permalink / raw)
  To: Andrew Morton, kbuild test robot
  Cc: kbuild-all, linux-mm, Andrew Morton, linux-kernel,
	Matthew Wilcox, Ross Zwisler, Jan Kara, Dan Williams,
	Paolo Bonzini, Radim Krčmář,
	Michal Hocko, Christian Koenig, Felix Kuehling, Ralph Campbell,
	John Hubbard, kvm, linux-rdma, linux-fsdevel, dri-devel

Should be all fixed in v2. I built v2 with and without mmu notifier and
did not have any issues.

On Fri, Dec 07, 2018 at 05:19:21AM +0800, kbuild test robot wrote:
> Hi Jérôme,
> 
> I love your patch! Yet something to improve:
> 
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.20-rc5]
> [cannot apply to next-20181206]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/jglisse-redhat-com/mmu-notifier-contextual-informations/20181207-031930
> config: x86_64-randconfig-x017-201848 (attached as .config)
> compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64 
> 
> All errors (new ones prefixed by >>):
> 
>    fs///proc/task_mmu.c: In function 'clear_refs_write':
>    fs///proc/task_mmu.c:1099:29: error: storage size of 'range' isn't known
>       struct mmu_notifier_range range;
>                                 ^~~~~
> >> fs///proc/task_mmu.c:1147:18: error: 'MMU_NOTIFY_SOFT_DIRTY' undeclared (first use in this function); did you mean 'CLEAR_REFS_SOFT_DIRTY'?
>        range.event = MMU_NOTIFY_SOFT_DIRTY;
>                      ^~~~~~~~~~~~~~~~~~~~~
>                      CLEAR_REFS_SOFT_DIRTY
>    fs///proc/task_mmu.c:1147:18: note: each undeclared identifier is reported only once for each function it appears in
>    fs///proc/task_mmu.c:1099:29: warning: unused variable 'range' [-Wunused-variable]
>       struct mmu_notifier_range range;
>                                 ^~~~~
> 
> vim +1147 fs///proc/task_mmu.c
> 
>   1069	
>   1070	static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>   1071					size_t count, loff_t *ppos)
>   1072	{
>   1073		struct task_struct *task;
>   1074		char buffer[PROC_NUMBUF];
>   1075		struct mm_struct *mm;
>   1076		struct vm_area_struct *vma;
>   1077		enum clear_refs_types type;
>   1078		struct mmu_gather tlb;
>   1079		int itype;
>   1080		int rv;
>   1081	
>   1082		memset(buffer, 0, sizeof(buffer));
>   1083		if (count > sizeof(buffer) - 1)
>   1084			count = sizeof(buffer) - 1;
>   1085		if (copy_from_user(buffer, buf, count))
>   1086			return -EFAULT;
>   1087		rv = kstrtoint(strstrip(buffer), 10, &itype);
>   1088		if (rv < 0)
>   1089			return rv;
>   1090		type = (enum clear_refs_types)itype;
>   1091		if (type < CLEAR_REFS_ALL || type >= CLEAR_REFS_LAST)
>   1092			return -EINVAL;
>   1093	
>   1094		task = get_proc_task(file_inode(file));
>   1095		if (!task)
>   1096			return -ESRCH;
>   1097		mm = get_task_mm(task);
>   1098		if (mm) {
> > 1099			struct mmu_notifier_range range;
>   1100			struct clear_refs_private cp = {
>   1101				.type = type,
>   1102			};
>   1103			struct mm_walk clear_refs_walk = {
>   1104				.pmd_entry = clear_refs_pte_range,
>   1105				.test_walk = clear_refs_test_walk,
>   1106				.mm = mm,
>   1107				.private = &cp,
>   1108			};
>   1109	
>   1110			if (type == CLEAR_REFS_MM_HIWATER_RSS) {
>   1111				if (down_write_killable(&mm->mmap_sem)) {
>   1112					count = -EINTR;
>   1113					goto out_mm;
>   1114				}
>   1115	
>   1116				/*
>   1117				 * Writing 5 to /proc/pid/clear_refs resets the peak
>   1118				 * resident set size to this mm's current rss value.
>   1119				 */
>   1120				reset_mm_hiwater_rss(mm);
>   1121				up_write(&mm->mmap_sem);
>   1122				goto out_mm;
>   1123			}
>   1124	
>   1125			down_read(&mm->mmap_sem);
>   1126			tlb_gather_mmu(&tlb, mm, 0, -1);
>   1127			if (type == CLEAR_REFS_SOFT_DIRTY) {
>   1128				for (vma = mm->mmap; vma; vma = vma->vm_next) {
>   1129					if (!(vma->vm_flags & VM_SOFTDIRTY))
>   1130						continue;
>   1131					up_read(&mm->mmap_sem);
>   1132					if (down_write_killable(&mm->mmap_sem)) {
>   1133						count = -EINTR;
>   1134						goto out_mm;
>   1135					}
>   1136					for (vma = mm->mmap; vma; vma = vma->vm_next) {
>   1137						vma->vm_flags &= ~VM_SOFTDIRTY;
>   1138						vma_set_page_prot(vma);
>   1139					}
>   1140					downgrade_write(&mm->mmap_sem);
>   1141					break;
>   1142				}
>   1143	
>   1144				range.start = 0;
>   1145				range.end = -1UL;
>   1146				range.mm = mm;
> > 1147				range.event = MMU_NOTIFY_SOFT_DIRTY;
>   1148				mmu_notifier_invalidate_range_start(&range);
>   1149			}
>   1150			walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
>   1151			if (type == CLEAR_REFS_SOFT_DIRTY)
>   1152				mmu_notifier_invalidate_range_end(&range);
>   1153			tlb_finish_mmu(&tlb, 0, -1);
>   1154			up_read(&mm->mmap_sem);
>   1155	out_mm:
>   1156			mmput(mm);
>   1157		}
>   1158		put_task_struct(task);
>   1159	
>   1160		return count;
>   1161	}
>   1162	
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation



^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-12-06 21:51 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-03 20:18 [PATCH 0/3] mmu notifier contextual informations jglisse
2018-12-03 20:18 ` [PATCH 1/3] mm/mmu_notifier: use structure for invalidate_range_start/end callback jglisse
2018-12-03 20:18 ` [PATCH 2/3] mm/mmu_notifier: use structure for invalidate_range_start/end calls jglisse
2018-12-04  0:09   ` Jerome Glisse
2018-12-05 11:04   ` Jan Kara
2018-12-05 15:53     ` Jerome Glisse
2018-12-05 16:28       ` Jan Kara
2018-12-06 20:31   ` kbuild test robot
2018-12-06 20:35   ` kbuild test robot
2018-12-03 20:18 ` [PATCH 3/3] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
2018-12-04  8:17   ` Mike Rapoport
2018-12-04 14:48     ` Jerome Glisse
2018-12-04 23:21   ` Andrew Morton
2018-12-06 20:53   ` kbuild test robot
2018-12-06 21:19   ` kbuild test robot
2018-12-06 21:51     ` Jerome Glisse
2018-12-04  7:35 ` [PATCH 0/3] mmu notifier contextual informations Koenig, Christian

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).