linux-rdma.vger.kernel.org archive mirror
* [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
@ 2019-10-15 18:12 Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled Jason Gunthorpe
                   ` (17 more replies)
  0 siblings, 18 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

From: Jason Gunthorpe <jgg@mellanox.com>

Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) follow a common pattern where they use only
invalidate_range_start/end and immediately check the invalidating range
against some driver data structure to tell if the driver is interested.
Half of them use an interval_tree; the others use simple linear search
lists.
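The pattern can be sketched in plain C (a userspace model with illustrative names, not any one driver's code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Shape of the per-driver pattern being consolidated (all names are
 * illustrative, not any one driver's code): the driver keeps its own
 * list of registered VA ranges and, in its invalidate_range_start
 * callback, scans the list to see whether the invalidation overlaps
 * anything it cares about. */
struct drv_range {
	unsigned long start, last;	/* inclusive range of VAs */
	struct drv_range *next;
	bool invalidated;
};

static void drv_invalidate_range_start(struct drv_range *head,
				       unsigned long start,
				       unsigned long end)
{
	struct drv_range *r;

	/* linear-search variant; half of the drivers use an interval_tree */
	for (r = head; r != NULL; r = r->next)
		if (r->start < end && start <= r->last)
			r->invalidated = true;
}
```

Each driver re-implements some variant of this intersection check, which is where the duplicated (and racy) code comes from.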

Of the ones I checked, most seem to have various kinds of races, bugs and
implementation problems. This is a result of the complexity of how the
notifier interacts with get_user_pages(); it is extremely difficult to use
correctly.

Consolidate all of this code together into the core mmu_notifier and
provide a locking scheme similar to hmm_mirror that allows the user to
safely use get_user_pages() and reliably know if the page list still
matches the mm.

This new arrangement plays nicely with the !blockable mode for
OOM. Scanning the interval tree is done such that the intersection test
will always succeed, and since no invalidate_range_end is exposed to
drivers the scheme safely allows multiple drivers to be subscribed.
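The intended usage can be modeled in userspace C (illustrative names, not the proposed kernel API; the real scheme also involves the driver's own lock and a wait for in-flight invalidations):

```c
#include <stdbool.h>

/* Userspace model of the collision-retry scheme described above: a
 * driver samples a sequence number, builds its page list (this is
 * get_user_pages() in the kernel), then re-checks the sequence under
 * its own lock.  Names are illustrative, not the proposed kernel API. */
struct range_model {
	unsigned long invalidate_seq;	/* bumped by every invalidation */
};

static unsigned long model_read_begin(const struct range_model *r)
{
	return r->invalidate_seq;
}

static bool model_read_retry(const struct range_model *r, unsigned long seq)
{
	/* true: an invalidation collided, drop the pages and retry */
	return r->invalidate_seq != seq;
}

static void model_invalidate(struct range_model *r)
{
	r->invalidate_seq += 2;	/* start + end together bump past readers */
}
```

In the kernel, the sequence sample would bracket get_user_pages(), and the retry check runs under the driver's own page-table lock so a collision is never missed.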

Four places are converted as an example of how the new API is used.
Four are left for future patches:
 - i915_gem has complex locking around destruction of a registration,
   needs more study
 - hfi1 (2nd user) needs access to the rbtree
 - scif_dma has a complicated logic flow
 - vhost's mmu notifiers are already being rewritten

This is still being tested, but I figured I would send it out now to start
getting help from the xen, amd and hfi1 maintainers, since I cannot test
those drivers here.

It is intended for the hmm tree.

Jason Gunthorpe (15):
  mm/mmu_notifier: define the header pre-processor parts even if
    disabled
  mm/mmu_notifier: add an interval tree notifier
  mm/hmm: allow hmm_range to be used with a mmu_range_notifier or
    hmm_mirror
  mm/hmm: define the pre-processor related parts of hmm.h even if
    disabled
  RDMA/odp: Use mmu_range_notifier_insert()
  RDMA/hfi1: Use mmu_range_notifier_insert for user_exp_rcv
  drm/radeon: use mmu_range_notifier_insert
  xen/gntdev: Use select for DMA_SHARED_BUFFER
  xen/gntdev: use mmu_range_notifier_insert
  nouveau: use mmu_notifier directly for invalidate_range_start
  nouveau: use mmu_range_notifier instead of hmm_mirror
  drm/amdgpu: Call find_vma under mmap_sem
  drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
  drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
  mm/hmm: remove hmm_mirror and related

 Documentation/vm/hmm.rst                      | 105 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |   2 +
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   9 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        | 445 ++------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h        |  53 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |  13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 111 ++--
 drivers/gpu/drm/nouveau/nouveau_svm.c         | 229 +++++---
 drivers/gpu/drm/radeon/radeon.h               |   9 +-
 drivers/gpu/drm/radeon/radeon_mn.c            | 218 ++-----
 drivers/infiniband/core/device.c              |   1 -
 drivers/infiniband/core/umem_odp.c            | 288 +---------
 drivers/infiniband/hw/hfi1/file_ops.c         |   2 +-
 drivers/infiniband/hw/hfi1/hfi.h              |   2 +-
 drivers/infiniband/hw/hfi1/user_exp_rcv.c     | 144 ++---
 drivers/infiniband/hw/hfi1/user_exp_rcv.h     |   3 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |   7 +-
 drivers/infiniband/hw/mlx5/mr.c               |   3 +-
 drivers/infiniband/hw/mlx5/odp.c              |  48 +-
 drivers/xen/Kconfig                           |   3 +-
 drivers/xen/gntdev-common.h                   |   8 +-
 drivers/xen/gntdev.c                          | 179 ++----
 include/linux/hmm.h                           | 195 +------
 include/linux/mmu_notifier.h                  | 124 +++-
 include/rdma/ib_umem_odp.h                    |  65 +--
 include/rdma/ib_verbs.h                       |   2 -
 kernel/fork.c                                 |   1 -
 mm/Kconfig                                    |   2 +-
 mm/hmm.c                                      | 275 +--------
 mm/mmu_notifier.c                             | 542 +++++++++++++++++-
 32 files changed, 1180 insertions(+), 1923 deletions(-)

-- 
2.23.0


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-21 18:32   ` Jerome Glisse
  2019-10-15 18:12 ` [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier Jason Gunthorpe
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

From: Jason Gunthorpe <jgg@mellanox.com>

Now that we have KERNEL_HEADER_TEST, all headers are generally compile
tested, so relying on makefile tricks to avoid compiling code that depends
on CONFIG_MMU_NOTIFIER is more annoying than it is worth.

Instead, follow the usual pattern and provide most of the header
unconditionally, with only the functions stubbed out when
CONFIG_MMU_NOTIFIER is disabled. This ensures the code compiles no matter
what the config setting is.

While here, move struct mmu_notifier_mm, which is private to
mmu_notifier.c, into that file.
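The stub pattern referred to above looks roughly like this (the CONFIG macro and function name here are hypothetical):

```c
/* Illustration of the header pattern the patch adopts (the CONFIG
 * macro and function name here are hypothetical): declarations are
 * always visible, and only the bodies differ, so a header compiles
 * whether or not the option is enabled. */
#define CONFIG_EXAMPLE_FEATURE 0	/* flip to 1 to model "enabled" */

#if CONFIG_EXAMPLE_FEATURE
int example_feature_query(void);	/* real version lives in a .c file */
#else
static inline int example_feature_query(void)
{
	return 0;			/* stub: feature compiled out */
}
#endif
```

With the option off, callers still compile and the stub collapses to a constant; with it on, only the extern declaration is seen and the real body lives in a .c file.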

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 include/linux/mmu_notifier.h | 46 +++++++++++++-----------------------
 mm/mmu_notifier.c            | 13 ++++++++++
 2 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 1bd8e6a09a3c27..12bd603d318ce7 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -7,8 +7,9 @@
 #include <linux/mm_types.h>
 #include <linux/srcu.h>
 
+struct mmu_notifier_mm;
 struct mmu_notifier;
-struct mmu_notifier_ops;
+struct mmu_notifier_range;
 
 /**
  * enum mmu_notifier_event - reason for the mmu notifier callback
@@ -40,36 +41,8 @@ enum mmu_notifier_event {
 	MMU_NOTIFY_SOFT_DIRTY,
 };
 
-#ifdef CONFIG_MMU_NOTIFIER
-
-#ifdef CONFIG_LOCKDEP
-extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
-#endif
-
-/*
- * The mmu notifier_mm structure is allocated and installed in
- * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
- * critical section and it's released only when mm_count reaches zero
- * in mmdrop().
- */
-struct mmu_notifier_mm {
-	/* all mmu notifiers registerd in this mm are queued in this list */
-	struct hlist_head list;
-	/* to serialize the list modifications and hlist_unhashed */
-	spinlock_t lock;
-};
-
 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
 
-struct mmu_notifier_range {
-	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long start;
-	unsigned long end;
-	unsigned flags;
-	enum mmu_notifier_event event;
-};
-
 struct mmu_notifier_ops {
 	/*
 	 * Called either by mmu_notifier_unregister or when the mm is
@@ -249,6 +222,21 @@ struct mmu_notifier {
 	unsigned int users;
 };
 
+#ifdef CONFIG_MMU_NOTIFIER
+
+#ifdef CONFIG_LOCKDEP
+extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
+#endif
+
+struct mmu_notifier_range {
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	unsigned long start;
+	unsigned long end;
+	unsigned flags;
+	enum mmu_notifier_event event;
+};
+
 static inline int mm_has_notifiers(struct mm_struct *mm)
 {
 	return unlikely(mm->mmu_notifier_mm);
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 7fde88695f35d6..367670cfd02b7b 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -27,6 +27,19 @@ struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
 };
 #endif
 
+/*
+ * The mmu notifier_mm structure is allocated and installed in
+ * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
+ * critical section and it's released only when mm_count reaches zero
+ * in mmdrop().
+ */
+struct mmu_notifier_mm {
+	/* all mmu notifiers registered in this mm are queued in this list */
+	struct hlist_head list;
+	/* to serialize the list modifications and hlist_unhashed */
+	spinlock_t lock;
+};
+
 /*
  * This function can't run concurrently against mmu_notifier_register
  * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap
-- 
2.23.0



* [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-21 18:30   ` Jerome Glisse
  2019-10-27 23:15   ` Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror Jason Gunthorpe
                   ` (15 subsequent siblings)
  17 siblings, 2 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Michal Hocko

From: Jason Gunthorpe <jgg@mellanox.com>

Of the 13 users of mmu_notifiers, 8 use only
invalidate_range_start/end() and immediately intersect the
mmu_notifier_range with some kind of internal list of VAs. Four of those
use an interval tree (i915_gem, radeon_mn, umem_odp, hfi1) and four use a
linked list of some kind (scif_dma, vhost, gntdev, hmm).

The remaining 5 either don't use invalidate_range_start() or do something
special with it.

It turns out that building a correct scheme with an interval tree is
pretty complicated, particularly if the use case is synchronizing against
another thread doing get_user_pages(). Many of these implementations have
various subtle and difficult-to-fix races.

This approach puts the interval tree as common code at the top of the mmu
notifier call tree and implements a shareable locking scheme.

It includes:
 - An interval tree tracking VA ranges, with per-range callbacks
 - A read/write locking scheme for the interval tree that avoids
   sleeping in the notifier path (for OOM killer)
 - A sequence-counter-based collision-retry locking scheme to tell the
   device page fault handler that a VA range is being concurrently
   invalidated.

This is based on various ideas:
- hmm accumulates invalidated VA ranges and releases them when all
  invalidates are done, via active_invalidate_ranges count.
  This approach avoids having to intersect the interval tree twice (as
  umem_odp does) at the potential cost of a longer device page fault.

- kvm/umem_odp use a sequence counter to drive the collision retry,
  via invalidate_seq

- a deferred-work todo list processed on unlock, as RTNL does, via
  deferred_list. This makes adding/removing interval tree members more
  deterministic

- seqlock, except this version makes the seqlock idea multi-holder on the
  write side by protecting it with active_invalidate_ranges and a spinlock

To minimize MM overhead when only the interval tree is being used, the
entire SRCU and hlist overheads are dropped using some simple
branches. Similarly the interval tree overhead is dropped when in hlist
mode.

The overhead from the mandatory spinlock is broadly the same as that of
most existing users, which already had a lock (or two) of some sort on the
invalidation path.
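A simplified userspace model of the write side (names are illustrative, and unlike the real code the odd bit here is set unconditionally rather than only when a registered range actually overlaps):

```c
#include <stdbool.h>

/* Userspace model of the multi-holder write side described above
 * (field and function names are illustrative): invalidate_seq is made
 * odd while any range invalidation is in flight and becomes even
 * again only when the last concurrent invalidation finishes. */
struct mmn_model {
	unsigned long invalidate_seq;	/* even = idle, odd = invalidating */
	unsigned long active_invalidate_ranges;
};

static void model_inv_start(struct mmn_model *m)
{
	m->active_invalidate_ranges++;
	m->invalidate_seq |= 1;		/* enter the excluded state */
}

static void model_inv_end(struct mmn_model *m)
{
	if (--m->active_invalidate_ranges == 0)
		m->invalidate_seq++;	/* back to even: itree may change */
}

static bool model_is_invalidating(const struct mmn_model *m)
{
	return m->invalidate_seq & 1;
}
```

Because the counter is even whenever the tree is idle, a reader that samples a matching odd value knows an invalidation is in flight.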

Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 include/linux/mmu_notifier.h |  78 ++++++
 mm/Kconfig                   |   1 +
 mm/mmu_notifier.c            | 529 +++++++++++++++++++++++++++++++++--
 3 files changed, 583 insertions(+), 25 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 12bd603d318ce7..bc2b12483de127 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -6,10 +6,12 @@
 #include <linux/spinlock.h>
 #include <linux/mm_types.h>
 #include <linux/srcu.h>
+#include <linux/interval_tree.h>
 
 struct mmu_notifier_mm;
 struct mmu_notifier;
 struct mmu_notifier_range;
+struct mmu_range_notifier;
 
 /**
  * enum mmu_notifier_event - reason for the mmu notifier callback
@@ -32,6 +34,9 @@ struct mmu_notifier_range;
  * access flags). User should soft dirty the page in the end callback to make
  * sure that anyone relying on soft dirtyness catch pages that might be written
  * through non CPU mappings.
+ *
+ * @MMU_NOTIFY_RELEASE: used during mmu_range_notifier invalidate to signal that
+ * the mm refcount is zero and the range is no longer accessible.
  */
 enum mmu_notifier_event {
 	MMU_NOTIFY_UNMAP = 0,
@@ -39,6 +44,7 @@ enum mmu_notifier_event {
 	MMU_NOTIFY_PROTECTION_VMA,
 	MMU_NOTIFY_PROTECTION_PAGE,
 	MMU_NOTIFY_SOFT_DIRTY,
+	MMU_NOTIFY_RELEASE,
 };
 
 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
@@ -222,6 +228,25 @@ struct mmu_notifier {
 	unsigned int users;
 };
 
+/**
+ * struct mmu_range_notifier_ops
+ * @invalidate: Upon return the caller must stop using any SPTEs within this
+ *              range, this function can sleep. Return false if blocking was
+ *              required but range is non-blocking
+ */
+struct mmu_range_notifier_ops {
+	bool (*invalidate)(struct mmu_range_notifier *mrn,
+			   const struct mmu_notifier_range *range);
+};
+
+struct mmu_range_notifier {
+	struct interval_tree_node interval_tree;
+	const struct mmu_range_notifier_ops *ops;
+	struct hlist_node deferred_item;
+	unsigned long invalidate_seq;
+	struct mm_struct *mm;
+};
+
 #ifdef CONFIG_MMU_NOTIFIER
 
 #ifdef CONFIG_LOCKDEP
@@ -263,6 +288,59 @@ extern int __mmu_notifier_register(struct mmu_notifier *mn,
 				   struct mm_struct *mm);
 extern void mmu_notifier_unregister(struct mmu_notifier *mn,
 				    struct mm_struct *mm);
+
+unsigned long mmu_range_read_begin(struct mmu_range_notifier *mrn);
+int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
+			      unsigned long start, unsigned long length,
+			      struct mm_struct *mm);
+int mmu_range_notifier_insert_locked(struct mmu_range_notifier *mrn,
+				     unsigned long start, unsigned long length,
+				     struct mm_struct *mm);
+void mmu_range_notifier_remove(struct mmu_range_notifier *mrn);
+
+/**
+ * mmu_range_read_retry - End a read side critical section against a VA range
+ * @mrn: The range under lock
+ * @seq: The return of the paired mmu_range_read_begin()
+ *
+ * This MUST be called under a user provided lock that is also held
+ * unconditionally by op->invalidate(). That lock provides the required SMP
+ * barrier for handling invalidate_seq.
+ *
+ * Each call should be paired with a single mmu_range_read_begin() and
+ * should be used to conclude the read side.
+ *
+ * Returns true if an invalidation collided with this critical section, and
+ * the caller should retry.
+ */
+static inline bool mmu_range_read_retry(struct mmu_range_notifier *mrn,
+					unsigned long seq)
+{
+	return READ_ONCE(mrn->invalidate_seq) != seq;
+}
+
+/**
+ * mmu_range_check_retry - Test if a collision has occurred
+ * @mrn: The range under lock
+ * @seq: The return of the matching mmu_range_read_begin()
+ *
+ * This can be used in the critical section between mmu_range_read_begin() and
+ * mmu_range_read_retry().  A return of true indicates an invalidation has
+ * collided with this lock and a future mmu_range_read_retry() will return
+ * true.
+ *
+ * False is not reliable and only suggests a collision has not happened. It
+ * can be called many times and does not have to hold the user provided lock.
+ *
+ * This call can be used as part of loops and other expensive operations to
+ * expedite a retry.
+ */
+static inline bool mmu_range_check_retry(struct mmu_range_notifier *mrn,
+					 unsigned long seq)
+{
+	return READ_ONCE(mrn->invalidate_seq) != seq;
+}
+
 extern void __mmu_notifier_mm_destroy(struct mm_struct *mm);
 extern void __mmu_notifier_release(struct mm_struct *mm);
 extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
diff --git a/mm/Kconfig b/mm/Kconfig
index a5dae9a7eb510a..d0b5046d9aeffd 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -284,6 +284,7 @@ config VIRT_TO_BUS
 config MMU_NOTIFIER
 	bool
 	select SRCU
+	select INTERVAL_TREE
 
 config KSM
 	bool "Enable KSM for page merging"
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 367670cfd02b7b..5e5e75ebcde4af 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -12,6 +12,7 @@
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/err.h>
+#include <linux/interval_tree.h>
 #include <linux/srcu.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
@@ -36,10 +37,243 @@ struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
 struct mmu_notifier_mm {
 	/* all mmu notifiers registered in this mm are queued in this list */
 	struct hlist_head list;
+	bool has_interval;
 	/* to serialize the list modifications and hlist_unhashed */
 	spinlock_t lock;
+	unsigned long invalidate_seq;
+	unsigned long active_invalidate_ranges;
+	struct rb_root_cached itree;
+	wait_queue_head_t wq;
+	struct hlist_head deferred_list;
 };
 
+/*
+ * This is a collision-retry read-side/write-side 'lock', a lot like a
+ * seqcount, however this allows multiple write-sides to hold it at
+ * once. Conceptually the write side is protecting the values of the PTEs in
+ * this mm, such that PTES cannot be read into SPTEs while any writer exists.
+ *
+ * Note that the core mm creates nested invalidate_range_start()/end() regions
+ * within the same thread, and runs invalidate_range_start()/end() in parallel
+ * on multiple CPUs. This is designed to not reduce concurrency or block
+ * progress on the mm side.
+ *
+ * As a secondary function, holding the full write side also serves to prevent
+ * writers for the itree, this is an optimization to avoid extra locking
+ * during invalidate_range_start/end notifiers.
+ *
+ * The write side has two states, fully excluded:
+ *  - mmn_mm->active_invalidate_ranges != 0
+ *  - mmn_mm->invalidate_seq & 1 == true
+ *  - some range on the mm_struct is being invalidated
+ *  - the itree is not allowed to change
+ *
+ * And partially excluded:
+ *  - mmn_mm->active_invalidate_ranges != 0
+ *  - some range on the mm_struct is being invalidated
+ *  - the itree is allowed to change
+ *
+ * The latter state avoids some expensive work in inv_end in the common case of
+ * no mrn monitoring the VA.
+ */
+static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
+{
+	lockdep_assert_held(&mmn_mm->lock);
+	return mmn_mm->invalidate_seq & 1;
+}
+
+static struct mmu_range_notifier *
+mn_itree_inv_start_range(struct mmu_notifier_mm *mmn_mm,
+			 const struct mmu_notifier_range *range,
+			 unsigned long *seq)
+{
+	struct interval_tree_node *node;
+	struct mmu_range_notifier *res = NULL;
+
+	spin_lock(&mmn_mm->lock);
+	mmn_mm->active_invalidate_ranges++;
+	node = interval_tree_iter_first(&mmn_mm->itree, range->start,
+					range->end - 1);
+	if (node) {
+		mmn_mm->invalidate_seq |= 1;
+		res = container_of(node, struct mmu_range_notifier,
+				   interval_tree);
+	}
+
+	*seq = mmn_mm->invalidate_seq;
+	spin_unlock(&mmn_mm->lock);
+	return res;
+}
+
+static struct mmu_range_notifier *
+mn_itree_inv_next(struct mmu_range_notifier *mrn,
+		  const struct mmu_notifier_range *range)
+{
+	struct interval_tree_node *node;
+
+	node = interval_tree_iter_next(&mrn->interval_tree, range->start,
+				       range->end - 1);
+	if (!node)
+		return NULL;
+	return container_of(node, struct mmu_range_notifier, interval_tree);
+}
+
+static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
+{
+	struct mmu_range_notifier *mrn;
+	struct hlist_node *next;
+	bool need_wake = false;
+
+	spin_lock(&mmn_mm->lock);
+	if (--mmn_mm->active_invalidate_ranges ||
+	    !mn_itree_is_invalidating(mmn_mm)) {
+		spin_unlock(&mmn_mm->lock);
+		return;
+	}
+
+	mmn_mm->invalidate_seq++;
+	need_wake = true;
+
+	/*
+	 * The inv_end incorporates a deferred mechanism like rtnl. Adds and
+	 * removes are queued until the final inv_end happens then they are
+	 * progressed. This arrangement for tree updates is used to avoid
+	 * using a blocking lock during invalidate_range_start.
+	 */
+	hlist_for_each_entry_safe(mrn, next, &mmn_mm->deferred_list,
+				  deferred_item) {
+		if (RB_EMPTY_NODE(&mrn->interval_tree.rb))
+			interval_tree_insert(&mrn->interval_tree,
+					     &mmn_mm->itree);
+		else
+			interval_tree_remove(&mrn->interval_tree,
+					     &mmn_mm->itree);
+		hlist_del(&mrn->deferred_item);
+	}
+	spin_unlock(&mmn_mm->lock);
+
+	/*
+	 * TODO: Since we already have a spinlock above, this would be faster
+	 * as wake_up_q
+	 */
+	if (need_wake)
+		wake_up_all(&mmn_mm->wq);
+}
+
+/**
+ * mmu_range_read_begin - Begin a read side critical section against a VA range
+ * @mrn: The range to lock
+ *
+ * mmu_range_read_begin()/mmu_range_read_retry() implement a collision-retry
+ * locking scheme similar to seqcount for the VA range under mrn. If the mm
+ * invokes invalidation during the critical section then
+ * mmu_range_read_retry() will return true.
+ *
+ * This is useful to obtain shadow PTEs where teardown or setup of the SPTEs
+ * require a blocking context.  The critical region formed by this lock can
+ * sleep, and the required 'user_lock' can also be a sleeping lock.
+ *
+ * The caller is required to provide a 'user_lock' to serialize both teardown
+ * and setup.
+ *
+ * The return value should be passed to mmu_range_read_retry().
+ */
+unsigned long mmu_range_read_begin(struct mmu_range_notifier *mrn)
+{
+	struct mmu_notifier_mm *mmn_mm = mrn->mm->mmu_notifier_mm;
+	unsigned long seq;
+	bool is_invalidating;
+
+	/*
+	 * If the mrn has a different seq value under the user_lock than we
+	 * started with then it has collided.
+	 *
+	 * If the mrn currently has the same seq value as the mmn_mm seq, then
+	 * it is currently between invalidate_start/end and is colliding.
+	 *
+	 * The locking looks broadly like this:
+	 *   mn_tree_invalidate_start():          mmu_range_read_begin():
+	 *                                         spin_lock
+	 *                                          seq = READ_ONCE(mrn->invalidate_seq);
+	 *                                          seq == mmn_mm->invalidate_seq
+	 *                                         spin_unlock
+	 *    spin_lock
+	 *     seq = ++mmn_mm->invalidate_seq
+	 *    spin_unlock
+	 *    mrn->invalidate_seq = seq
+	 *     op->invalidate_range():
+	 *       user_lock
+	 *       user_unlock
+	 *
+	 *                          [Required: mmu_range_read_retry() == true]
+	 *
+	 *   mn_itree_inv_end():
+	 *    spin_lock
+	 *     seq = ++mmn_mm->invalidate_seq
+	 *    spin_unlock
+	 *
+	 *                                        user_lock
+	 *                                         mmu_range_read_retry():
+	 *                                          READ_ONCE(mrn->invalidate_seq) != seq
+	 *                                        user_unlock
+	 *
+	 * Logically mrn->invalidate_seq is locked under the user provided
+	 * lock, however the write is placed before that lock due to the way
+	 * the API is layered.
+	 *
+	 * Barriers are not needed as any races here are closed by an eventual
+	 * mmu_range_read_retry(), which provides a barrier via the user_lock.
+	 */
+	spin_lock(&mmn_mm->lock);
+	seq = READ_ONCE(mrn->invalidate_seq);
+	is_invalidating = seq == mmn_mm->invalidate_seq;
+	spin_unlock(&mmn_mm->lock);
+
+	/*
+	 * mrn->invalidate_seq is always set to an odd value. This ensures
+	 * that if seq does wrap we will always clear the below sleep in some
+	 * reasonable time as mmn_mm->invalidate_seq is even in the idle
+	 * state.
+	 */
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
+	if (is_invalidating)
+		wait_event(mmn_mm->wq,
+			   READ_ONCE(mmn_mm->invalidate_seq) != seq);
+
+	/*
+	 * Notice that mmu_range_read_retry() can already be true at this
+	 * point, avoiding loops here allows the user of this lock to provide
+	 * a global time bound.
+	 */
+
+	return seq;
+}
+EXPORT_SYMBOL_GPL(mmu_range_read_begin);
+
+static void mn_itree_release(struct mmu_notifier_mm *mmn_mm,
+			     struct mm_struct *mm)
+{
+	struct mmu_notifier_range range = {
+		.flags = MMU_NOTIFIER_RANGE_BLOCKABLE,
+		.event = MMU_NOTIFY_RELEASE,
+		.mm = mm,
+		.start = 0,
+		.end = ULONG_MAX,
+	};
+	struct mmu_range_notifier *mrn;
+	unsigned long cur_seq;
+	bool ret;
+
+	for (mrn = mn_itree_inv_start_range(mmn_mm, &range, &cur_seq); mrn;
+	     mrn = mn_itree_inv_next(mrn, &range)) {
+		ret = mrn->ops->invalidate(mrn, &range);
+		WARN_ON(ret);
+	}
+
+	mn_itree_inv_end(mmn_mm);
+}
+
 /*
  * This function can't run concurrently against mmu_notifier_register
  * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap
@@ -52,17 +286,24 @@ struct mmu_notifier_mm {
  * can't go away from under us as exit_mmap holds an mm_count pin
  * itself.
  */
-void __mmu_notifier_release(struct mm_struct *mm)
+static void mn_hlist_release(struct mmu_notifier_mm *mmn_mm,
+			     struct mm_struct *mm)
 {
 	struct mmu_notifier *mn;
 	int id;
 
+	if (mmn_mm->has_interval)
+		mn_itree_release(mmn_mm, mm);
+
+	if (hlist_empty(&mmn_mm->list))
+		return;
+
 	/*
 	 * SRCU here will block mmu_notifier_unregister until
 	 * ->release returns.
 	 */
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
+	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist)
 		/*
 		 * If ->release runs before mmu_notifier_unregister it must be
 		 * handled, as it's the only way for the driver to flush all
@@ -72,9 +313,9 @@ void __mmu_notifier_release(struct mm_struct *mm)
 		if (mn->ops->release)
 			mn->ops->release(mn, mm);
 
-	spin_lock(&mm->mmu_notifier_mm->lock);
-	while (unlikely(!hlist_empty(&mm->mmu_notifier_mm->list))) {
-		mn = hlist_entry(mm->mmu_notifier_mm->list.first,
+	spin_lock(&mmn_mm->lock);
+	while (unlikely(!hlist_empty(&mmn_mm->list))) {
+		mn = hlist_entry(mmn_mm->list.first,
 				 struct mmu_notifier,
 				 hlist);
 		/*
@@ -85,7 +326,7 @@ void __mmu_notifier_release(struct mm_struct *mm)
 		 */
 		hlist_del_init_rcu(&mn->hlist);
 	}
-	spin_unlock(&mm->mmu_notifier_mm->lock);
+	spin_unlock(&mmn_mm->lock);
 	srcu_read_unlock(&srcu, id);
 
 	/*
@@ -100,6 +341,17 @@ void __mmu_notifier_release(struct mm_struct *mm)
 	synchronize_srcu(&srcu);
 }
 
+void __mmu_notifier_release(struct mm_struct *mm)
+{
+	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+
+	if (mmn_mm->has_interval)
+		mn_itree_release(mmn_mm, mm);
+
+	if (!hlist_empty(&mmn_mm->list))
+		mn_hlist_release(mmn_mm, mm);
+}
+
 /*
  * If no young bitflag is supported by the hardware, ->clear_flush_young can
  * unmap the address and return 1 or 0 depending if the mapping previously
@@ -172,14 +424,41 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
 	srcu_read_unlock(&srcu, id);
 }
 
-int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
+static int mn_itree_invalidate(struct mmu_notifier_mm *mmn_mm,
+				     const struct mmu_notifier_range *range)
+{
+	struct mmu_range_notifier *mrn;
+	unsigned long cur_seq;
+
+	for (mrn = mn_itree_inv_start_range(mmn_mm, range, &cur_seq); mrn;
+	     mrn = mn_itree_inv_next(mrn, range)) {
+		bool ret;
+
+		WRITE_ONCE(mrn->invalidate_seq, cur_seq);
+		ret = mrn->ops->invalidate(mrn, range);
+		if (!ret && !WARN_ON(mmu_notifier_range_blockable(range)))
+			goto out_would_block;
+	}
+	return 0;
+
+out_would_block:
+	/*
+	 * On -EAGAIN the non-blocking caller is not allowed to call
+	 * invalidate_range_end()
+	 */
+	mn_itree_inv_end(mmn_mm);
+	return -EAGAIN;
+}
+
+static int mn_hlist_invalidate_range_start(struct mmu_notifier_mm *mmn_mm,
+					   struct mmu_notifier_range *range)
 {
 	struct mmu_notifier *mn;
 	int ret = 0;
 	int id;
 
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
+	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) {
 		if (mn->ops->invalidate_range_start) {
 			int _ret;
 
@@ -203,15 +482,30 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 	return ret;
 }
 
-void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
-					 bool only_end)
+int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
+{
+	struct mmu_notifier_mm *mmn_mm = range->mm->mmu_notifier_mm;
+	int ret = 0;
+
+	if (mmn_mm->has_interval) {
+		ret = mn_itree_invalidate(mmn_mm, range);
+		if (ret)
+			return ret;
+	}
+	if (!hlist_empty(&mmn_mm->list))
+		return mn_hlist_invalidate_range_start(mmn_mm, range);
+	return 0;
+}
+
+static void mn_hlist_invalidate_end(struct mmu_notifier_mm *mmn_mm,
+				    struct mmu_notifier_range *range,
+				    bool only_end)
 {
 	struct mmu_notifier *mn;
 	int id;
 
-	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
+	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) {
 		/*
 		 * Call invalidate_range here too to avoid the need for the
 		 * subsystem of having to register an invalidate_range_end
@@ -238,6 +532,19 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
 		}
 	}
 	srcu_read_unlock(&srcu, id);
+}
+
+void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
+					 bool only_end)
+{
+	struct mmu_notifier_mm *mmn_mm = range->mm->mmu_notifier_mm;
+
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
+	if (mmn_mm->has_interval)
+		mn_itree_inv_end(mmn_mm);
+
+	if (!hlist_empty(&mmn_mm->list))
+		mn_hlist_invalidate_end(mmn_mm, range, only_end);
 	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }
 
@@ -256,8 +563,9 @@ void __mmu_notifier_invalidate_range(struct mm_struct *mm,
 }
 
 /*
- * Same as mmu_notifier_register but here the caller must hold the
- * mmap_sem in write mode.
+ * Same as mmu_notifier_register but here the caller must hold the mmap_sem in
+ * write mode. A NULL mn signals the notifier is being registered for itree
+ * mode.
  */
 int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
 {
@@ -274,9 +582,6 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
 		fs_reclaim_release(GFP_KERNEL);
 	}
 
-	mn->mm = mm;
-	mn->users = 1;
-
 	if (!mm->mmu_notifier_mm) {
 		/*
 		 * kmalloc cannot be called under mm_take_all_locks(), but we
@@ -284,21 +589,22 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
 		 * the write side of the mmap_sem.
 		 */
 		mmu_notifier_mm =
-			kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
+			kzalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
 		if (!mmu_notifier_mm)
 			return -ENOMEM;
 
 		INIT_HLIST_HEAD(&mmu_notifier_mm->list);
 		spin_lock_init(&mmu_notifier_mm->lock);
+		mmu_notifier_mm->invalidate_seq = 2;
+		mmu_notifier_mm->itree = RB_ROOT_CACHED;
+		init_waitqueue_head(&mmu_notifier_mm->wq);
+		INIT_HLIST_HEAD(&mmu_notifier_mm->deferred_list);
 	}
 
 	ret = mm_take_all_locks(mm);
 	if (unlikely(ret))
 		goto out_clean;
 
-	/* Pairs with the mmdrop in mmu_notifier_unregister_* */
-	mmgrab(mm);
-
 	/*
 	 * Serialize the update against mmu_notifier_unregister. A
 	 * side note: mmu_notifier_release can't run concurrently with
@@ -306,13 +612,26 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
 	 * current->mm or explicitly with get_task_mm() or similar).
 	 * We can't race against any other mmu notifier method either
 	 * thanks to mm_take_all_locks().
+	 *
+	 * release semantics are provided for users not inside a lock covered
+	 * by mm_take_all_locks(). acquire can only be used while holding the
+	 * mmgrab or mmget, and is safe because once created the
+	 * mmu_notifier_mm is not freed until the mm is destroyed.
 	 */
 	if (mmu_notifier_mm)
-		mm->mmu_notifier_mm = mmu_notifier_mm;
+		smp_store_release(&mm->mmu_notifier_mm, mmu_notifier_mm);
 
-	spin_lock(&mm->mmu_notifier_mm->lock);
-	hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
-	spin_unlock(&mm->mmu_notifier_mm->lock);
+	if (mn) {
+		/* Pairs with the mmdrop in mmu_notifier_unregister_* */
+		mmgrab(mm);
+		mn->mm = mm;
+		mn->users = 1;
+
+		spin_lock(&mm->mmu_notifier_mm->lock);
+		hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
+		spin_unlock(&mm->mmu_notifier_mm->lock);
+	} else
+		mm->mmu_notifier_mm->has_interval = true;
 
 	mm_drop_all_locks(mm);
 	BUG_ON(atomic_read(&mm->mm_users) <= 0);
@@ -529,6 +848,166 @@ void mmu_notifier_put(struct mmu_notifier *mn)
 }
 EXPORT_SYMBOL_GPL(mmu_notifier_put);
 
+static int __mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
+				       unsigned long start,
+				       unsigned long length,
+				       struct mmu_notifier_mm *mmn_mm,
+				       struct mm_struct *mm)
+{
+	mrn->mm = mm;
+	RB_CLEAR_NODE(&mrn->interval_tree.rb);
+	mrn->interval_tree.start = start;
+	/*
+	 * Note that the representation of the intervals in the interval tree
+	 * considers the ending point as contained in the interval.
+	 */
+	if (length == 0 ||
+	    check_add_overflow(start, length - 1, &mrn->interval_tree.last))
+		return -EOVERFLOW;
+
+	/* pairs with mmdrop in mmu_range_notifier_remove() */
+	mmgrab(mm);
+
+	/*
+	 * If some invalidate_range_start/end region is going on in parallel
+	 * we don't know what VA ranges are affected, so we must assume this
+	 * new range is included.
+	 *
+	 * If the itree is invalidating then we are not allowed to change
+	 * it. Retrying until invalidation is done is tricky due to the
+	 * possibility of livelock; instead defer the add to the unlock so
+	 * this algorithm is deterministic.
+	 *
+	 * In all cases the value of mrn->invalidate_seq should be
+	 * odd, see mmu_range_read_begin()
+	 */
+	spin_lock(&mmn_mm->lock);
+	if (mmn_mm->active_invalidate_ranges) {
+		if (mn_itree_is_invalidating(mmn_mm))
+			hlist_add_head(&mrn->deferred_item,
+				       &mmn_mm->deferred_list);
+		else {
+			mmn_mm->invalidate_seq |= 1;
+			interval_tree_insert(&mrn->interval_tree,
+					     &mmn_mm->itree);
+		}
+		mrn->invalidate_seq = mmn_mm->invalidate_seq;
+	} else {
+		WARN_ON(mn_itree_is_invalidating(mmn_mm));
+		mrn->invalidate_seq = mmn_mm->invalidate_seq - 1;
+		interval_tree_insert(&mrn->interval_tree, &mmn_mm->itree);
+	}
+	spin_unlock(&mmn_mm->lock);
+	return 0;
+}
+
+/**
+ * mmu_range_notifier_insert - Insert a range notifier
+ * @mrn: Range notifier to register
+ * @start: Starting virtual address to monitor
+ * @length: Length of the range to monitor
+ * @mm: mm_struct to attach to
+ *
+ * This function subscribes the range notifier for notifications from the mm.
+ * Upon return the ops related to mmu_range_notifier will be called whenever
+ * an event that intersects with the given range occurs.
+ *
+ * Upon return the range_notifier may not be present in the interval tree yet.
+ * The caller must use the normal range notifier locking flow via
+ * mmu_range_read_begin() to establish SPTEs for this range.
+ */
+int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
+			      unsigned long start, unsigned long length,
+			      struct mm_struct *mm)
+{
+	struct mmu_notifier_mm *mmn_mm;
+	int ret;
+
+	might_lock(&mm->mmap_sem);
+
+	mmn_mm = smp_load_acquire(&mm->mmu_notifier_mm);
+	if (!mmn_mm || !mmn_mm->has_interval) {
+		ret = mmu_notifier_register(NULL, mm);
+		if (ret)
+			return ret;
+		mmn_mm = mm->mmu_notifier_mm;
+	}
+	return __mmu_range_notifier_insert(mrn, start, length, mmn_mm, mm);
+}
+EXPORT_SYMBOL_GPL(mmu_range_notifier_insert);
+
+int mmu_range_notifier_insert_locked(struct mmu_range_notifier *mrn,
+				     unsigned long start, unsigned long length,
+				     struct mm_struct *mm)
+{
+	struct mmu_notifier_mm *mmn_mm;
+	int ret;
+
+	lockdep_assert_held_write(&mm->mmap_sem);
+
+	mmn_mm = mm->mmu_notifier_mm;
+	if (!mmn_mm || !mmn_mm->has_interval) {
+		ret = __mmu_notifier_register(NULL, mm);
+		if (ret)
+			return ret;
+		mmn_mm = mm->mmu_notifier_mm;
+	}
+	return __mmu_range_notifier_insert(mrn, start, length, mmn_mm, mm);
+}
+EXPORT_SYMBOL_GPL(mmu_range_notifier_insert_locked);
+
+/**
+ * mmu_range_notifier_remove - Remove a range notifier
+ * @mrn: Range notifier to unregister
+ *
+ * This function must be paired with mmu_range_notifier_insert(). It cannot be
+ * called from any ops callback.
+ *
+ * Once this returns, ops callbacks are no longer running on other CPUs and
+ * will not be called in the future.
+ */
+void mmu_range_notifier_remove(struct mmu_range_notifier *mrn)
+{
+	struct mm_struct *mm = mrn->mm;
+	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+	unsigned long seq = 0;
+
+	might_sleep();
+
+	spin_lock(&mmn_mm->lock);
+	if (mn_itree_is_invalidating(mmn_mm)) {
+		/*
+		 * remove is being called after insert put this on the
+		 * deferred list, but before the deferred list was processed.
+		 */
+		if (RB_EMPTY_NODE(&mrn->interval_tree.rb)) {
+			hlist_del(&mrn->deferred_item);
+		} else {
+			hlist_add_head(&mrn->deferred_item,
+				       &mmn_mm->deferred_list);
+			seq = mmn_mm->invalidate_seq;
+		}
+	} else {
+		WARN_ON(RB_EMPTY_NODE(&mrn->interval_tree.rb));
+		interval_tree_remove(&mrn->interval_tree, &mmn_mm->itree);
+	}
+	spin_unlock(&mmn_mm->lock);
+
+	/*
+	 * The possible sleep while an invalidation is in progress requires
+	 * that the caller not hold any locks taken by invalidation callbacks.
+	 */
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
+	if (seq)
+		wait_event(mmn_mm->wq,
+			   READ_ONCE(mmn_mm->invalidate_seq) != seq);
+
+	/* pairs with mmgrab in mmu_range_notifier_insert() */
+	mmdrop(mm);
+}
+EXPORT_SYMBOL_GPL(mmu_range_notifier_remove);
+
 /**
  * mmu_notifier_synchronize - Ensure all mmu_notifiers are freed
  *
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-21 18:33   ` Jerome Glisse
  2019-10-15 18:12 ` [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled Jason Gunthorpe
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

From: Jason Gunthorpe <jgg@mellanox.com>

hmm_mirror's handling of ranges does not use a sequence count which
results in this bug:

         CPU0                                   CPU1
                                     hmm_range_wait_until_valid(range)
                                         valid == true
                                     hmm_range_fault(range)
hmm_invalidate_range_start()
   range->valid = false
hmm_invalidate_range_end()
   range->valid = true
                                     hmm_range_valid(range)
                                          valid == true

Where the hmm_range_valid should not have succeeded.

Adding the required sequence count would make it nearly identical to the
new mmu_range_notifier. Instead replace the hmm_mirror stuff with
mmu_range_notifier.

Co-existence of the two APIs is the first step.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 include/linux/hmm.h |  5 +++++
 mm/hmm.c            | 25 +++++++++++++++++++------
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 3fec513b9c00f1..8ac1fd6a81af8f 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -145,6 +145,9 @@ enum hmm_pfn_value_e {
 /*
  * struct hmm_range - track invalidation lock on virtual address range
  *
+ * @notifier: an optional mmu_range_notifier
+ * @notifier_seq: when notifier is used this is the result of
+ *                mmu_range_read_begin()
  * @hmm: the core HMM structure this range is active against
  * @vma: the vm area struct for the range
  * @list: all range lock are on a list
@@ -159,6 +162,8 @@ enum hmm_pfn_value_e {
  * @valid: pfns array did not change since it has been fill by an HMM function
  */
 struct hmm_range {
+	struct mmu_range_notifier *notifier;
+	unsigned long		notifier_seq;
 	struct hmm		*hmm;
 	struct list_head	list;
 	unsigned long		start;
diff --git a/mm/hmm.c b/mm/hmm.c
index 902f5fa6bf93ad..22ac3595771feb 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -852,6 +852,14 @@ void hmm_range_unregister(struct hmm_range *range)
 }
 EXPORT_SYMBOL(hmm_range_unregister);
 
+static bool needs_retry(struct hmm_range *range)
+{
+	if (range->notifier)
+		return mmu_range_check_retry(range->notifier,
+					     range->notifier_seq);
+	return !range->valid;
+}
+
 static const struct mm_walk_ops hmm_walk_ops = {
 	.pud_entry	= hmm_vma_walk_pud,
 	.pmd_entry	= hmm_vma_walk_pmd,
@@ -892,18 +900,23 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
 	unsigned long start = range->start, end;
 	struct hmm_vma_walk hmm_vma_walk;
-	struct hmm *hmm = range->hmm;
+	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	int ret;
 
-	lockdep_assert_held(&hmm->mmu_notifier.mm->mmap_sem);
+	if (range->notifier)
+		mm = range->notifier->mm;
+	else
+		mm = range->hmm->mmu_notifier.mm;
+
+	lockdep_assert_held(&mm->mmap_sem);
 
 	do {
 		/* If range is no longer valid force retry. */
-		if (!range->valid)
+		if (needs_retry(range))
 			return -EBUSY;
 
-		vma = find_vma(hmm->mmu_notifier.mm, start);
+		vma = find_vma(mm, start);
 		if (vma == NULL || (vma->vm_flags & device_vma))
 			return -EFAULT;
 
@@ -933,7 +946,7 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 			start = hmm_vma_walk.last;
 
 			/* Keep trying while the range is valid. */
-		} while (ret == -EBUSY && range->valid);
+		} while (ret == -EBUSY && !needs_retry(range));
 
 		if (ret) {
 			unsigned long i;
@@ -991,7 +1004,7 @@ long hmm_range_dma_map(struct hmm_range *range, struct device *device,
 			continue;
 
 		/* Check if range is being invalidated */
-		if (!range->valid) {
+		if (needs_retry(range)) {
 			ret = -EBUSY;
 			goto unmap;
 		}
-- 
2.23.0



* [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (2 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-21 18:31   ` Jerome Glisse
  2019-10-15 18:12 ` [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert() Jason Gunthorpe
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

From: Jason Gunthorpe <jgg@mellanox.com>

Only the function calls are stubbed out with static inlines that always
fail. This is the standard way to write a header for an optional component
and makes it easier for drivers that only optionally need HMM_MIRROR.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 include/linux/hmm.h | 59 ++++++++++++++++++++++++++++++++++++---------
 kernel/fork.c       |  1 -
 2 files changed, 47 insertions(+), 13 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 8ac1fd6a81af8f..2666eb08a40615 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -62,8 +62,6 @@
 #include <linux/kconfig.h>
 #include <asm/pgtable.h>
 
-#ifdef CONFIG_HMM_MIRROR
-
 #include <linux/device.h>
 #include <linux/migrate.h>
 #include <linux/memremap.h>
@@ -374,6 +372,15 @@ struct hmm_mirror {
 	struct list_head		list;
 };
 
+/*
+ * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
+ */
+#define HMM_FAULT_ALLOW_RETRY		(1 << 0)
+
+/* Don't fault in missing PTEs, just snapshot the current state. */
+#define HMM_FAULT_SNAPSHOT		(1 << 1)
+
+#ifdef CONFIG_HMM_MIRROR
 int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);
 void hmm_mirror_unregister(struct hmm_mirror *mirror);
 
@@ -383,14 +390,6 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror);
 int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror);
 void hmm_range_unregister(struct hmm_range *range);
 
-/*
- * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
- */
-#define HMM_FAULT_ALLOW_RETRY		(1 << 0)
-
-/* Don't fault in missing PTEs, just snapshot the current state. */
-#define HMM_FAULT_SNAPSHOT		(1 << 1)
-
 long hmm_range_fault(struct hmm_range *range, unsigned int flags);
 
 long hmm_range_dma_map(struct hmm_range *range,
@@ -401,6 +400,44 @@ long hmm_range_dma_unmap(struct hmm_range *range,
 			 struct device *device,
 			 dma_addr_t *daddrs,
 			 bool dirty);
+#else
+static inline int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void hmm_mirror_unregister(struct hmm_mirror *mirror)
+{
+}
+
+static inline int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void hmm_range_unregister(struct hmm_range *range)
+{
+}
+
+static inline long hmm_range_fault(struct hmm_range *range, unsigned int flags)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline long hmm_range_dma_map(struct hmm_range *range,
+				     struct device *device, dma_addr_t *daddrs,
+				     unsigned int flags)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline long hmm_range_dma_unmap(struct hmm_range *range,
+				       struct device *device,
+				       dma_addr_t *daddrs, bool dirty)
+{
+	return -EOPNOTSUPP;
+}
+#endif
 
 /*
  * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
@@ -411,6 +448,4 @@ long hmm_range_dma_unmap(struct hmm_range *range,
  */
 #define HMM_RANGE_DEFAULT_TIMEOUT 1000
 
-#endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
-
 #endif /* LINUX_HMM_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index f9572f41612628..4561a65d19db88 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -40,7 +40,6 @@
 #include <linux/binfmts.h>
 #include <linux/mman.h>
 #include <linux/mmu_notifier.h>
-#include <linux/hmm.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/vmacache.h>
-- 
2.23.0



* [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert()
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (3 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-11-04 20:25   ` Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv Jason Gunthorpe
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

From: Jason Gunthorpe <jgg@mellanox.com>

Replace the internal interval tree based mmu notifier with the new common
mmu_range_notifier_insert() API. This removes a lot of code and fixes a
deadlock that can be triggered in ODP:

 zap_page_range()
  mmu_notifier_invalidate_range_start()
   [..]
    ib_umem_notifier_invalidate_range_start()
       down_read(&per_mm->umem_rwsem)
  unmap_single_vma()
    [..]
      __split_huge_page_pmd()
        mmu_notifier_invalidate_range_start()
        [..]
           ib_umem_notifier_invalidate_range_start()
              down_read(&per_mm->umem_rwsem)   // DEADLOCK

        mmu_notifier_invalidate_range_end()
           up_read(&per_mm->umem_rwsem)
  mmu_notifier_invalidate_range_end()
     up_read(&per_mm->umem_rwsem)

The umem_rwsem is held across the range_start/end as the ODP algorithm for
invalidate_range_end cannot tolerate changes to the interval
tree. However, due to the nested invalidation regions the second
down_read() can deadlock if there are competing writers. The new core code
provides an alternative scheme to solve this problem.

Fixes: ca748c39ea3f ("RDMA/umem: Get rid of per_mm->notifier_count")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/infiniband/core/device.c     |   1 -
 drivers/infiniband/core/umem_odp.c   | 288 +++------------------------
 drivers/infiniband/hw/mlx5/mlx5_ib.h |   7 +-
 drivers/infiniband/hw/mlx5/mr.c      |   3 +-
 drivers/infiniband/hw/mlx5/odp.c     |  48 ++---
 include/rdma/ib_umem_odp.h           |  65 ++----
 include/rdma/ib_verbs.h              |   2 -
 7 files changed, 67 insertions(+), 347 deletions(-)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 2dd2cfe9b56136..ac7924b3c73abe 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -2617,7 +2617,6 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
 	SET_DEVICE_OP(dev_ops, get_vf_config);
 	SET_DEVICE_OP(dev_ops, get_vf_stats);
 	SET_DEVICE_OP(dev_ops, init_port);
-	SET_DEVICE_OP(dev_ops, invalidate_range);
 	SET_DEVICE_OP(dev_ops, iw_accept);
 	SET_DEVICE_OP(dev_ops, iw_add_ref);
 	SET_DEVICE_OP(dev_ops, iw_connect);
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index d7d5fadf0899ad..6132b8127e8435 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -48,197 +48,32 @@
 
 #include "uverbs.h"
 
-static void ib_umem_notifier_start_account(struct ib_umem_odp *umem_odp)
-{
-	mutex_lock(&umem_odp->umem_mutex);
-	if (umem_odp->notifiers_count++ == 0)
-		/*
-		 * Initialize the completion object for waiting on
-		 * notifiers. Since notifier_count is zero, no one should be
-		 * waiting right now.
-		 */
-		reinit_completion(&umem_odp->notifier_completion);
-	mutex_unlock(&umem_odp->umem_mutex);
-}
-
-static void ib_umem_notifier_end_account(struct ib_umem_odp *umem_odp)
-{
-	mutex_lock(&umem_odp->umem_mutex);
-	/*
-	 * This sequence increase will notify the QP page fault that the page
-	 * that is going to be mapped in the spte could have been freed.
-	 */
-	++umem_odp->notifiers_seq;
-	if (--umem_odp->notifiers_count == 0)
-		complete_all(&umem_odp->notifier_completion);
-	mutex_unlock(&umem_odp->umem_mutex);
-}
-
-static void ib_umem_notifier_release(struct mmu_notifier *mn,
-				     struct mm_struct *mm)
-{
-	struct ib_ucontext_per_mm *per_mm =
-		container_of(mn, struct ib_ucontext_per_mm, mn);
-	struct rb_node *node;
-
-	down_read(&per_mm->umem_rwsem);
-	if (!per_mm->mn.users)
-		goto out;
-
-	for (node = rb_first_cached(&per_mm->umem_tree); node;
-	     node = rb_next(node)) {
-		struct ib_umem_odp *umem_odp =
-			rb_entry(node, struct ib_umem_odp, interval_tree.rb);
-
-		/*
-		 * Increase the number of notifiers running, to prevent any
-		 * further fault handling on this MR.
-		 */
-		ib_umem_notifier_start_account(umem_odp);
-		complete_all(&umem_odp->notifier_completion);
-		umem_odp->umem.ibdev->ops.invalidate_range(
-			umem_odp, ib_umem_start(umem_odp),
-			ib_umem_end(umem_odp));
-	}
-
-out:
-	up_read(&per_mm->umem_rwsem);
-}
-
-static int invalidate_range_start_trampoline(struct ib_umem_odp *item,
-					     u64 start, u64 end, void *cookie)
-{
-	ib_umem_notifier_start_account(item);
-	item->umem.ibdev->ops.invalidate_range(item, start, end);
-	return 0;
-}
-
-static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn,
-				const struct mmu_notifier_range *range)
-{
-	struct ib_ucontext_per_mm *per_mm =
-		container_of(mn, struct ib_ucontext_per_mm, mn);
-	int rc;
-
-	if (mmu_notifier_range_blockable(range))
-		down_read(&per_mm->umem_rwsem);
-	else if (!down_read_trylock(&per_mm->umem_rwsem))
-		return -EAGAIN;
-
-	if (!per_mm->mn.users) {
-		up_read(&per_mm->umem_rwsem);
-		/*
-		 * At this point users is permanently zero and visible to this
-		 * CPU without a lock, that fact is relied on to skip the unlock
-		 * in range_end.
-		 */
-		return 0;
-	}
-
-	rc = rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start,
-					   range->end,
-					   invalidate_range_start_trampoline,
-					   mmu_notifier_range_blockable(range),
-					   NULL);
-	if (rc)
-		up_read(&per_mm->umem_rwsem);
-	return rc;
-}
-
-static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start,
-					   u64 end, void *cookie)
-{
-	ib_umem_notifier_end_account(item);
-	return 0;
-}
-
-static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn,
-				const struct mmu_notifier_range *range)
-{
-	struct ib_ucontext_per_mm *per_mm =
-		container_of(mn, struct ib_ucontext_per_mm, mn);
-
-	if (unlikely(!per_mm->mn.users))
-		return;
-
-	rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start,
-				      range->end,
-				      invalidate_range_end_trampoline, true, NULL);
-	up_read(&per_mm->umem_rwsem);
-}
-
-static struct mmu_notifier *ib_umem_alloc_notifier(struct mm_struct *mm)
-{
-	struct ib_ucontext_per_mm *per_mm;
-
-	per_mm = kzalloc(sizeof(*per_mm), GFP_KERNEL);
-	if (!per_mm)
-		return ERR_PTR(-ENOMEM);
-
-	per_mm->umem_tree = RB_ROOT_CACHED;
-	init_rwsem(&per_mm->umem_rwsem);
-
-	WARN_ON(mm != current->mm);
-	rcu_read_lock();
-	per_mm->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
-	rcu_read_unlock();
-	return &per_mm->mn;
-}
-
-static void ib_umem_free_notifier(struct mmu_notifier *mn)
-{
-	struct ib_ucontext_per_mm *per_mm =
-		container_of(mn, struct ib_ucontext_per_mm, mn);
-
-	WARN_ON(!RB_EMPTY_ROOT(&per_mm->umem_tree.rb_root));
-
-	put_pid(per_mm->tgid);
-	kfree(per_mm);
-}
-
-static const struct mmu_notifier_ops ib_umem_notifiers = {
-	.release                    = ib_umem_notifier_release,
-	.invalidate_range_start     = ib_umem_notifier_invalidate_range_start,
-	.invalidate_range_end       = ib_umem_notifier_invalidate_range_end,
-	.alloc_notifier		    = ib_umem_alloc_notifier,
-	.free_notifier		    = ib_umem_free_notifier,
-};
-
 static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp)
 {
-	struct ib_ucontext_per_mm *per_mm;
-	struct mmu_notifier *mn;
 	int ret;
 
 	umem_odp->umem.is_odp = 1;
+	mutex_init(&umem_odp->umem_mutex);
+
 	if (!umem_odp->is_implicit_odp) {
 		size_t page_size = 1UL << umem_odp->page_shift;
+		unsigned long start;
+		unsigned long end;
 		size_t pages;
 
-		umem_odp->interval_tree.start =
-			ALIGN_DOWN(umem_odp->umem.address, page_size);
+		start = ALIGN_DOWN(umem_odp->umem.address, page_size);
 		if (check_add_overflow(umem_odp->umem.address,
 				       (unsigned long)umem_odp->umem.length,
-				       &umem_odp->interval_tree.last))
+				       &end))
 			return -EOVERFLOW;
-		umem_odp->interval_tree.last =
-			ALIGN(umem_odp->interval_tree.last, page_size);
-		if (unlikely(umem_odp->interval_tree.last < page_size))
+		end = ALIGN(end, page_size);
+		if (unlikely(end < page_size))
 			return -EOVERFLOW;
 
-		pages = (umem_odp->interval_tree.last -
-			 umem_odp->interval_tree.start) >>
-			umem_odp->page_shift;
+		pages = (end - start) >> umem_odp->page_shift;
 		if (!pages)
 			return -EINVAL;
 
-		/*
-		 * Note that the representation of the intervals in the
-		 * interval tree considers the ending point as contained in
-		 * the interval.
-		 */
-		umem_odp->interval_tree.last--;
-
 		umem_odp->page_list = kvcalloc(
 			pages, sizeof(*umem_odp->page_list), GFP_KERNEL);
 		if (!umem_odp->page_list)
@@ -250,26 +85,15 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp)
 			ret = -ENOMEM;
 			goto out_page_list;
 		}
-	}
 
-	mn = mmu_notifier_get(&ib_umem_notifiers, umem_odp->umem.owning_mm);
-	if (IS_ERR(mn)) {
-		ret = PTR_ERR(mn);
-		goto out_dma_list;
-	}
-	umem_odp->per_mm = per_mm =
-		container_of(mn, struct ib_ucontext_per_mm, mn);
-
-	mutex_init(&umem_odp->umem_mutex);
-	init_completion(&umem_odp->notifier_completion);
+		ret = mmu_range_notifier_insert(&umem_odp->notifier, start,
+						end - start, current->mm);
+		if (ret)
+			goto out_dma_list;
 
-	if (!umem_odp->is_implicit_odp) {
-		down_write(&per_mm->umem_rwsem);
-		interval_tree_insert(&umem_odp->interval_tree,
-				     &per_mm->umem_tree);
-		up_write(&per_mm->umem_rwsem);
+		umem_odp->tgid =
+			get_task_pid(current->group_leader, PIDTYPE_PID);
 	}
-	mmgrab(umem_odp->umem.owning_mm);
 
 	return 0;
 
@@ -290,8 +114,8 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp)
  * @udata: udata from the syscall being used to create the umem
  * @access: ib_reg_mr access flags
  */
-struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata,
-					       int access)
+struct ib_umem_odp *
+ib_umem_odp_alloc_implicit(struct ib_udata *udata, int access)
 {
 	struct ib_ucontext *context =
 		container_of(udata, struct uverbs_attr_bundle, driver_udata)
@@ -305,8 +129,6 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata,
 
 	if (!context)
 		return ERR_PTR(-EIO);
-	if (WARN_ON_ONCE(!context->device->ops.invalidate_range))
-		return ERR_PTR(-EINVAL);
 
 	umem_odp = kzalloc(sizeof(*umem_odp), GFP_KERNEL);
 	if (!umem_odp)
@@ -336,8 +158,9 @@ EXPORT_SYMBOL(ib_umem_odp_alloc_implicit);
  * @addr: The starting userspace VA
  * @size: The length of the userspace VA
  */
-struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_umem_odp *root,
-					    unsigned long addr, size_t size)
+struct ib_umem_odp *
+ib_umem_odp_alloc_child(struct ib_umem_odp *root, unsigned long addr,
+			size_t size, const struct mmu_range_notifier_ops *ops)
 {
 	/*
 	 * Caller must ensure that root cannot be freed during the call to
@@ -360,6 +183,7 @@ struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_umem_odp *root,
 	umem->writable   = root->umem.writable;
 	umem->owning_mm  = root->umem.owning_mm;
 	odp_data->page_shift = PAGE_SHIFT;
+	odp_data->notifier.ops = ops;
 
 	ret = ib_init_umem_odp(odp_data);
 	if (ret) {
@@ -383,7 +207,8 @@ EXPORT_SYMBOL(ib_umem_odp_alloc_child);
  * conjunction with MMU notifiers.
  */
 struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr,
-				    size_t size, int access)
+				    size_t size, int access,
+				    const struct mmu_range_notifier_ops *ops)
 {
 	struct ib_umem_odp *umem_odp;
 	struct ib_ucontext *context;
@@ -398,8 +223,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr,
 	if (!context)
 		return ERR_PTR(-EIO);
 
-	if (WARN_ON_ONCE(!(access & IB_ACCESS_ON_DEMAND)) ||
-	    WARN_ON_ONCE(!context->device->ops.invalidate_range))
+	if (WARN_ON_ONCE(!(access & IB_ACCESS_ON_DEMAND)))
 		return ERR_PTR(-EINVAL);
 
 	umem_odp = kzalloc(sizeof(struct ib_umem_odp), GFP_KERNEL);
@@ -411,6 +235,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr,
 	umem_odp->umem.address = addr;
 	umem_odp->umem.writable = ib_access_writable(access);
 	umem_odp->umem.owning_mm = mm = current->mm;
+	umem_odp->notifier.ops = ops;
 
 	umem_odp->page_shift = PAGE_SHIFT;
 	if (access & IB_ACCESS_HUGETLB) {
@@ -442,8 +267,6 @@ EXPORT_SYMBOL(ib_umem_odp_get);
 
 void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 {
-	struct ib_ucontext_per_mm *per_mm = umem_odp->per_mm;
-
 	/*
 	 * Ensure that no more pages are mapped in the umem.
 	 *
@@ -455,28 +278,11 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 		ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp),
 					    ib_umem_end(umem_odp));
 		mutex_unlock(&umem_odp->umem_mutex);
+		mmu_range_notifier_remove(&umem_odp->notifier);
 		kvfree(umem_odp->dma_list);
 		kvfree(umem_odp->page_list);
+		put_pid(umem_odp->tgid);
 	}
-
-	down_write(&per_mm->umem_rwsem);
-	if (!umem_odp->is_implicit_odp) {
-		interval_tree_remove(&umem_odp->interval_tree,
-				     &per_mm->umem_tree);
-		complete_all(&umem_odp->notifier_completion);
-	}
-	/*
-	 * NOTE! mmu_notifier_unregister() can happen between a start/end
-	 * callback, resulting in a missing end, and thus an unbalanced
-	 * lock. This doesn't really matter to us since we are about to kfree
-	 * the memory that holds the lock, however LOCKDEP doesn't like this.
-	 * Thus we call the mmu_notifier_put under the rwsem and test the
-	 * internal users count to reliably see if we are past this point.
-	 */
-	mmu_notifier_put(&per_mm->mn);
-	up_write(&per_mm->umem_rwsem);
-
-	mmdrop(umem_odp->umem.owning_mm);
 	kfree(umem_odp);
 }
 EXPORT_SYMBOL(ib_umem_odp_release);
@@ -501,7 +307,7 @@ EXPORT_SYMBOL(ib_umem_odp_release);
  */
 static int ib_umem_odp_map_dma_single_page(
 		struct ib_umem_odp *umem_odp,
-		int page_index,
+		unsigned int page_index,
 		struct page *page,
 		u64 access_mask,
 		unsigned long current_seq)
@@ -510,12 +316,7 @@ static int ib_umem_odp_map_dma_single_page(
 	dma_addr_t dma_addr;
 	int ret = 0;
 
-	/*
-	 * Note: we avoid writing if seq is different from the initial seq, to
-	 * handle case of a racing notifier. This check also allows us to bail
-	 * early if we have a notifier running in parallel with us.
-	 */
-	if (ib_umem_mmu_notifier_retry(umem_odp, current_seq)) {
+	if (mmu_range_check_retry(&umem_odp->notifier, current_seq)) {
 		ret = -EAGAIN;
 		goto out;
 	}
@@ -618,7 +419,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 	 * existing beyond the lifetime of the originating process.. Presumably
 	 * mmget_not_zero will fail in this case.
 	 */
-	owning_process = get_pid_task(umem_odp->per_mm->tgid, PIDTYPE_PID);
+	owning_process = get_pid_task(umem_odp->tgid, PIDTYPE_PID);
 	if (!owning_process || !mmget_not_zero(owning_mm)) {
 		ret = -EINVAL;
 		goto out_put_task;
@@ -762,32 +563,3 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 	}
 }
 EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages);
-
-/* @last is not a part of the interval. See comment for function
- * node_last.
- */
-int rbt_ib_umem_for_each_in_range(struct rb_root_cached *root,
-				  u64 start, u64 last,
-				  umem_call_back cb,
-				  bool blockable,
-				  void *cookie)
-{
-	int ret_val = 0;
-	struct interval_tree_node *node, *next;
-	struct ib_umem_odp *umem;
-
-	if (unlikely(start == last))
-		return ret_val;
-
-	for (node = interval_tree_iter_first(root, start, last - 1);
-			node; node = next) {
-		/* TODO move the blockable decision up to the callback */
-		if (!blockable)
-			return -EAGAIN;
-		next = interval_tree_iter_next(node, start, last - 1);
-		umem = container_of(node, struct ib_umem_odp, interval_tree);
-		ret_val = cb(umem, start, last, cookie) || ret_val;
-	}
-
-	return ret_val;
-}
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index f61d4005c6c379..c719f08b351670 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1263,8 +1263,6 @@ int mlx5_ib_odp_init_one(struct mlx5_ib_dev *ibdev);
 void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev);
 int __init mlx5_ib_odp_init(void);
 void mlx5_ib_odp_cleanup(void);
-void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long start,
-			      unsigned long end);
 void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent);
 void mlx5_odp_populate_klm(struct mlx5_klm *pklm, size_t offset,
 			   size_t nentries, struct mlx5_ib_mr *mr, int flags);
@@ -1294,11 +1292,10 @@ mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
 {
 	return -EOPNOTSUPP;
 }
-static inline void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp,
-					    unsigned long start,
-					    unsigned long end){};
 #endif /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
 
+extern const struct mmu_range_notifier_ops mlx5_mn_ops;
+
 /* Needed for rep profile */
 void __mlx5_ib_remove(struct mlx5_ib_dev *dev,
 		      const struct mlx5_ib_profile *profile,
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 199f7959aaa510..fbe31830b22807 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -743,7 +743,8 @@ static int mr_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	if (access_flags & IB_ACCESS_ON_DEMAND) {
 		struct ib_umem_odp *odp;
 
-		odp = ib_umem_odp_get(udata, start, length, access_flags);
+		odp = ib_umem_odp_get(udata, start, length, access_flags,
+				      &mlx5_mn_ops);
 		if (IS_ERR(odp)) {
 			mlx5_ib_dbg(dev, "umem get failed (%ld)\n",
 				    PTR_ERR(odp));
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index fd2306aff78ad7..98b1009c5c0e00 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -246,16 +246,23 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
 	xa_unlock(&imr->implicit_children);
 }
 
-void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long start,
-			      unsigned long end)
+static bool mlx5_ib_invalidate_range(struct mmu_range_notifier *mrn,
+				     const struct mmu_notifier_range *range)
 {
+	struct ib_umem_odp *umem_odp =
+		container_of(mrn, struct ib_umem_odp, notifier);
 	struct mlx5_ib_mr *mr;
 	const u64 umr_block_mask = (MLX5_UMR_MTT_ALIGNMENT /
 				    sizeof(struct mlx5_mtt)) - 1;
 	u64 idx = 0, blk_start_idx = 0;
+	unsigned long start;
+	unsigned long end;
 	int in_block = 0;
 	u64 addr;
 
+	if (!mmu_notifier_range_blockable(range))
+		return false;
+
 	mutex_lock(&umem_odp->umem_mutex);
 	/*
 	 * If npages is zero then umem_odp->private may not be setup yet. This
@@ -265,8 +272,8 @@ void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long start,
 		goto out;
 	mr = umem_odp->private;
 
-	start = max_t(u64, ib_umem_start(umem_odp), start);
-	end = min_t(u64, ib_umem_end(umem_odp), end);
+	start = max_t(u64, ib_umem_start(umem_odp), range->start);
+	end = min_t(u64, ib_umem_end(umem_odp), range->end);
 
 	/*
 	 * Iteration one - zap the HW's MTTs. The notifiers_count ensures that
@@ -317,8 +324,13 @@ void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long start,
 		destroy_unused_implicit_child_mr(mr);
 out:
 	mutex_unlock(&umem_odp->umem_mutex);
+	return true;
 }
 
+const struct mmu_range_notifier_ops mlx5_mn_ops = {
+	.invalidate = mlx5_ib_invalidate_range,
+};
+
 void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
 {
 	struct ib_odp_caps *caps = &dev->odp_caps;
@@ -419,7 +431,7 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr,
 
 	odp = ib_umem_odp_alloc_child(to_ib_umem_odp(imr->umem),
 				      idx * MLX5_IMR_MTT_SIZE,
-				      MLX5_IMR_MTT_SIZE);
+				      MLX5_IMR_MTT_SIZE, &mlx5_mn_ops);
 	if (IS_ERR(odp))
 		return ERR_CAST(odp);
 
@@ -605,8 +617,9 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 			     u64 user_va, size_t bcnt, u32 *bytes_mapped,
 			     u32 flags)
 {
-	int current_seq, page_shift, ret, np;
+	int page_shift, ret, np;
 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
+	unsigned long current_seq;
 	u64 access_mask;
 	u64 start_idx, page_mask;
 
@@ -618,12 +631,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= ODP_WRITE_ALLOWED_BIT;
 
-	current_seq = READ_ONCE(odp->notifiers_seq);
-	/*
-	 * Ensure the sequence number is valid for some time before we call
-	 * gup.
-	 */
-	smp_rmb();
+	current_seq = mmu_range_read_begin(&odp->notifier);
 
 	np = ib_umem_odp_map_dma_pages(odp, user_va, bcnt, access_mask,
 				       current_seq);
@@ -631,7 +639,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 		return np;
 
 	mutex_lock(&odp->umem_mutex);
-	if (!ib_umem_mmu_notifier_retry(odp, current_seq)) {
+	if (!mmu_range_read_retry(&odp->notifier, current_seq)) {
 		/*
 		 * No need to check whether the MTTs really belong to
 		 * this MR, since ib_umem_odp_map_dma_pages already
@@ -661,19 +669,6 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	return np << (page_shift - PAGE_SHIFT);
 
 out:
-	if (ret == -EAGAIN) {
-		unsigned long timeout = msecs_to_jiffies(MMU_NOTIFIER_TIMEOUT);
-
-		if (!wait_for_completion_timeout(&odp->notifier_completion,
-						 timeout)) {
-			mlx5_ib_warn(
-				mr->dev,
-				"timeout waiting for mmu notifier. seq %d against %d. notifiers_count=%d\n",
-				current_seq, odp->notifiers_seq,
-				odp->notifiers_count);
-		}
-	}
-
 	return ret;
 }
 
@@ -1614,7 +1609,6 @@ void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent)
 
 static const struct ib_device_ops mlx5_ib_dev_odp_ops = {
 	.advise_mr = mlx5_ib_advise_mr,
-	.invalidate_range = mlx5_ib_invalidate_range,
 };
 
 int mlx5_ib_odp_init_one(struct mlx5_ib_dev *dev)
diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
index 09b0e4494986a9..98ed5435afccd9 100644
--- a/include/rdma/ib_umem_odp.h
+++ b/include/rdma/ib_umem_odp.h
@@ -35,11 +35,11 @@
 
 #include <rdma/ib_umem.h>
 #include <rdma/ib_verbs.h>
-#include <linux/interval_tree.h>
 
 struct ib_umem_odp {
 	struct ib_umem umem;
-	struct ib_ucontext_per_mm *per_mm;
+	struct mmu_range_notifier notifier;
+	struct pid *tgid;
 
 	/*
 	 * An array of the pages included in the on-demand paging umem.
@@ -62,13 +62,8 @@ struct ib_umem_odp {
 	struct mutex		umem_mutex;
 	void			*private; /* for the HW driver to use. */
 
-	int notifiers_seq;
-	int notifiers_count;
 	int npages;
 
-	/* Tree tracking */
-	struct interval_tree_node interval_tree;
-
 	/*
 	 * An implicit odp umem cannot be DMA mapped, has 0 length, and serves
 	 * only as an anchor for the driver to hold onto the per_mm. FIXME:
@@ -77,7 +72,6 @@ struct ib_umem_odp {
 	 */
 	bool is_implicit_odp;
 
-	struct completion	notifier_completion;
 	unsigned int		page_shift;
 };
 
@@ -89,13 +83,13 @@ static inline struct ib_umem_odp *to_ib_umem_odp(struct ib_umem *umem)
 /* Returns the first page of an ODP umem. */
 static inline unsigned long ib_umem_start(struct ib_umem_odp *umem_odp)
 {
-	return umem_odp->interval_tree.start;
+	return umem_odp->notifier.interval_tree.start;
 }
 
 /* Returns the address of the page after the last one of an ODP umem. */
 static inline unsigned long ib_umem_end(struct ib_umem_odp *umem_odp)
 {
-	return umem_odp->interval_tree.last + 1;
+	return umem_odp->notifier.interval_tree.last + 1;
 }
 
 static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp)
@@ -119,21 +113,14 @@ static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp)
 
 #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
 
-struct ib_ucontext_per_mm {
-	struct mmu_notifier mn;
-	struct pid *tgid;
-
-	struct rb_root_cached umem_tree;
-	/* Protects umem_tree */
-	struct rw_semaphore umem_rwsem;
-};
-
 struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr,
-				    size_t size, int access);
+				    size_t size, int access,
+				    const struct mmu_range_notifier_ops *ops);
 struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata,
 					       int access);
-struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem,
-					    unsigned long addr, size_t size);
+struct ib_umem_odp *
+ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, unsigned long addr,
+			size_t size, const struct mmu_range_notifier_ops *ops);
 void ib_umem_odp_release(struct ib_umem_odp *umem_odp);
 
 int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
@@ -143,39 +130,11 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
 void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
 				 u64 bound);
 
-typedef int (*umem_call_back)(struct ib_umem_odp *item, u64 start, u64 end,
-			      void *cookie);
-/*
- * Call the callback on each ib_umem in the range. Returns the logical or of
- * the return values of the functions called.
- */
-int rbt_ib_umem_for_each_in_range(struct rb_root_cached *root,
-				  u64 start, u64 end,
-				  umem_call_back cb,
-				  bool blockable, void *cookie);
-
-static inline int ib_umem_mmu_notifier_retry(struct ib_umem_odp *umem_odp,
-					     unsigned long mmu_seq)
-{
-	/*
-	 * This code is strongly based on the KVM code from
-	 * mmu_notifier_retry. Should be called with
-	 * the relevant locks taken (umem_odp->umem_mutex
-	 * and the ucontext umem_mutex semaphore locked for read).
-	 */
-
-	if (unlikely(umem_odp->notifiers_count))
-		return 1;
-	if (umem_odp->notifiers_seq != mmu_seq)
-		return 1;
-	return 0;
-}
-
 #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
 
-static inline struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata,
-						  unsigned long addr,
-						  size_t size, int access)
+static inline struct ib_umem_odp *
+ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, size_t size,
+		int access, const struct mmu_range_notifier_ops *ops)
 {
 	return ERR_PTR(-EINVAL);
 }
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 6a47ba85c54c11..2c30c859ae0d13 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2422,8 +2422,6 @@ struct ib_device_ops {
 			    u64 iova);
 	int (*unmap_fmr)(struct list_head *fmr_list);
 	int (*dealloc_fmr)(struct ib_fmr *fmr);
-	void (*invalidate_range)(struct ib_umem_odp *umem_odp,
-				 unsigned long start, unsigned long end);
 	int (*attach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid);
 	int (*detach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid);
 	struct ib_xrcd *(*alloc_xrcd)(struct ib_device *device,
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_insert for user_exp_rcv
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (4 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert() Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-29 12:15   ` Dennis Dalessandro
  2019-10-15 18:12 ` [PATCH hmm 07/15] drm/radeon: use mmu_range_notifier_insert Jason Gunthorpe
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Mike Marciniszyn,
	Dennis Dalessandro

From: Jason Gunthorpe <jgg@mellanox.com>

This converts one of the two users of mmu_notifiers to use the new API.
The conversion is fairly straightforward; however, the existing use of
notifiers here seems to be racy.

Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/infiniband/hw/hfi1/file_ops.c     |   2 +-
 drivers/infiniband/hw/hfi1/hfi.h          |   2 +-
 drivers/infiniband/hw/hfi1/user_exp_rcv.c | 144 ++++++++--------------
 drivers/infiniband/hw/hfi1/user_exp_rcv.h |   3 +-
 4 files changed, 58 insertions(+), 93 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index f9a7e9d29c8ba2..7c5e3fb224139a 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -1138,7 +1138,7 @@ static int get_ctxt_info(struct hfi1_filedata *fd, unsigned long arg, u32 len)
 			HFI1_CAP_UGET_MASK(uctxt->flags, MASK) |
 			HFI1_CAP_KGET_MASK(uctxt->flags, K2U);
 	/* adjust flag if this fd is not able to cache */
-	if (!fd->handler)
+	if (!fd->use_mn)
 		cinfo.runtime_flags |= HFI1_CAP_TID_UNMAP; /* no caching */
 
 	cinfo.num_active = hfi1_count_active_units();
diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
index fa45350a9a1d32..fc10d65fc3e13c 100644
--- a/drivers/infiniband/hw/hfi1/hfi.h
+++ b/drivers/infiniband/hw/hfi1/hfi.h
@@ -1444,7 +1444,7 @@ struct hfi1_filedata {
 	/* for cpu affinity; -1 if none */
 	int rec_cpu_num;
 	u32 tid_n_pinned;
-	struct mmu_rb_handler *handler;
+	bool use_mn;
 	struct tid_rb_node **entry_to_rb;
 	spinlock_t tid_lock; /* protect tid_[limit,used] counters */
 	u32 tid_limit;
diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.c b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
index 3592a9ec155e85..2aca28a0b3ca47 100644
--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
@@ -59,11 +59,10 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
 			      struct tid_user_buf *tbuf,
 			      u32 rcventry, struct tid_group *grp,
 			      u16 pageidx, unsigned int npages);
-static int tid_rb_insert(void *arg, struct mmu_rb_node *node);
 static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
 				    struct tid_rb_node *tnode);
-static void tid_rb_remove(void *arg, struct mmu_rb_node *node);
-static int tid_rb_invalidate(void *arg, struct mmu_rb_node *mnode);
+static bool tid_rb_invalidate(struct mmu_range_notifier *mrn,
+			      const struct mmu_notifier_range *range);
 static int program_rcvarray(struct hfi1_filedata *fd, struct tid_user_buf *,
 			    struct tid_group *grp,
 			    unsigned int start, u16 count,
@@ -73,10 +72,8 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
 			      struct tid_group **grp);
 static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node);
 
-static struct mmu_rb_ops tid_rb_ops = {
-	.insert = tid_rb_insert,
-	.remove = tid_rb_remove,
-	.invalidate = tid_rb_invalidate
+static const struct mmu_range_notifier_ops tid_mn_ops = {
+	.invalidate = tid_rb_invalidate,
 };
 
 /*
@@ -87,7 +84,6 @@ static struct mmu_rb_ops tid_rb_ops = {
 int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd,
 			   struct hfi1_ctxtdata *uctxt)
 {
-	struct hfi1_devdata *dd = uctxt->dd;
 	int ret = 0;
 
 	spin_lock_init(&fd->tid_lock);
@@ -109,20 +105,7 @@ int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd,
 			fd->entry_to_rb = NULL;
 			return -ENOMEM;
 		}
-
-		/*
-		 * Register MMU notifier callbacks. If the registration
-		 * fails, continue without TID caching for this context.
-		 */
-		ret = hfi1_mmu_rb_register(fd, fd->mm, &tid_rb_ops,
-					   dd->pport->hfi1_wq,
-					   &fd->handler);
-		if (ret) {
-			dd_dev_info(dd,
-				    "Failed MMU notifier registration %d\n",
-				    ret);
-			ret = 0;
-		}
+		fd->use_mn = true;
 	}
 
 	/*
@@ -139,7 +122,7 @@ int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd,
 	 * init.
 	 */
 	spin_lock(&fd->tid_lock);
-	if (uctxt->subctxt_cnt && fd->handler) {
+	if (uctxt->subctxt_cnt && fd->use_mn) {
 		u16 remainder;
 
 		fd->tid_limit = uctxt->expected_count / uctxt->subctxt_cnt;
@@ -158,18 +141,10 @@ void hfi1_user_exp_rcv_free(struct hfi1_filedata *fd)
 {
 	struct hfi1_ctxtdata *uctxt = fd->uctxt;
 
-	/*
-	 * The notifier would have been removed when the process'es mm
-	 * was freed.
-	 */
-	if (fd->handler) {
-		hfi1_mmu_rb_unregister(fd->handler);
-	} else {
-		if (!EXP_TID_SET_EMPTY(uctxt->tid_full_list))
-			unlock_exp_tids(uctxt, &uctxt->tid_full_list, fd);
-		if (!EXP_TID_SET_EMPTY(uctxt->tid_used_list))
-			unlock_exp_tids(uctxt, &uctxt->tid_used_list, fd);
-	}
+	if (!EXP_TID_SET_EMPTY(uctxt->tid_full_list))
+		unlock_exp_tids(uctxt, &uctxt->tid_full_list, fd);
+	if (!EXP_TID_SET_EMPTY(uctxt->tid_used_list))
+		unlock_exp_tids(uctxt, &uctxt->tid_used_list, fd);
 
 	kfree(fd->invalid_tids);
 	fd->invalid_tids = NULL;
@@ -201,7 +176,7 @@ static void unpin_rcv_pages(struct hfi1_filedata *fd,
 
 	if (mapped) {
 		pci_unmap_single(dd->pcidev, node->dma_addr,
-				 node->mmu.len, PCI_DMA_FROMDEVICE);
+				 node->npages * PAGE_SIZE, PCI_DMA_FROMDEVICE);
 		pages = &node->pages[idx];
 	} else {
 		pages = &tidbuf->pages[idx];
@@ -777,8 +752,8 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
 		return -EFAULT;
 	}
 
-	node->mmu.addr = tbuf->vaddr + (pageidx * PAGE_SIZE);
-	node->mmu.len = npages * PAGE_SIZE;
+	node->notifier.ops = &tid_mn_ops;
+	node->fdata = fd;
 	node->phys = page_to_phys(pages[0]);
 	node->npages = npages;
 	node->rcventry = rcventry;
@@ -787,23 +762,34 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
 	node->freed = false;
 	memcpy(node->pages, pages, sizeof(struct page *) * npages);
 
-	if (!fd->handler)
-		ret = tid_rb_insert(fd, &node->mmu);
-	else
-		ret = hfi1_mmu_rb_insert(fd->handler, &node->mmu);
-
-	if (ret) {
-		hfi1_cdbg(TID, "Failed to insert RB node %u 0x%lx, 0x%lx %d",
-			  node->rcventry, node->mmu.addr, node->phys, ret);
-		pci_unmap_single(dd->pcidev, phys, npages * PAGE_SIZE,
-				 PCI_DMA_FROMDEVICE);
-		kfree(node);
-		return -EFAULT;
+	if (fd->use_mn) {
+		ret = mmu_range_notifier_insert(
+			&node->notifier, tbuf->vaddr + (pageidx * PAGE_SIZE),
+			npages * PAGE_SIZE, fd->mm);
+		if (ret)
+			goto out_unmap;
+		/*
+		 * FIXME: This is in the wrong order, the notifier should be
+		 * established before the pages are pinned by pin_rcv_pages.
+		 */
+		mmu_range_read_begin(&node->notifier);
 	}
+	fd->entry_to_rb[node->rcventry - uctxt->expected_base] = node;
+
 	hfi1_put_tid(dd, rcventry, PT_EXPECTED, phys, ilog2(npages) + 1);
 	trace_hfi1_exp_tid_reg(uctxt->ctxt, fd->subctxt, rcventry, npages,
-			       node->mmu.addr, node->phys, phys);
+			       node->notifier.interval_tree.start, node->phys,
+			       phys);
 	return 0;
+
+out_unmap:
+	hfi1_cdbg(TID, "Failed to insert RB node %u 0x%lx, 0x%lx %d",
+		  node->rcventry, node->notifier.interval_tree.start,
+		  node->phys, ret);
+	pci_unmap_single(dd->pcidev, phys, npages * PAGE_SIZE,
+			 PCI_DMA_FROMDEVICE);
+	kfree(node);
+	return -EFAULT;
 }
 
 static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
@@ -833,10 +819,9 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
 	if (grp)
 		*grp = node->grp;
 
-	if (!fd->handler)
-		cacheless_tid_rb_remove(fd, node);
-	else
-		hfi1_mmu_rb_remove(fd->handler, &node->mmu);
+	if (fd->use_mn)
+		mmu_range_notifier_remove(&node->notifier);
+	cacheless_tid_rb_remove(fd, node);
 
 	return 0;
 }
@@ -847,7 +832,8 @@ static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
 	struct hfi1_devdata *dd = uctxt->dd;
 
 	trace_hfi1_exp_tid_unreg(uctxt->ctxt, fd->subctxt, node->rcventry,
-				 node->npages, node->mmu.addr, node->phys,
+				 node->npages,
+				 node->notifier.interval_tree.start, node->phys,
 				 node->dma_addr);
 
 	/*
@@ -894,30 +880,28 @@ static void unlock_exp_tids(struct hfi1_ctxtdata *uctxt,
 				if (!node || node->rcventry != rcventry)
 					continue;
 
+				if (fd->use_mn)
+					mmu_range_notifier_remove(
+						&node->notifier);
 				cacheless_tid_rb_remove(fd, node);
 			}
 		}
 	}
 }
 
-/*
- * Always return 0 from this function.  A non-zero return indicates that the
- * remove operation will be called and that memory should be unpinned.
- * However, the driver cannot unpin out from under PSM.  Instead, retain the
- * memory (by returning 0) and inform PSM that the memory is going away.  PSM
- * will call back later when it has removed the memory from its list.
- */
-static int tid_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
+static bool tid_rb_invalidate(struct mmu_range_notifier *mrn,
+			      const struct mmu_notifier_range *range)
 {
-	struct hfi1_filedata *fdata = arg;
-	struct hfi1_ctxtdata *uctxt = fdata->uctxt;
 	struct tid_rb_node *node =
-		container_of(mnode, struct tid_rb_node, mmu);
+		container_of(mrn, struct tid_rb_node, notifier);
+	struct hfi1_filedata *fdata = node->fdata;
+	struct hfi1_ctxtdata *uctxt = fdata->uctxt;
 
 	if (node->freed)
-		return 0;
+		return true;
 
-	trace_hfi1_exp_tid_inval(uctxt->ctxt, fdata->subctxt, node->mmu.addr,
+	trace_hfi1_exp_tid_inval(uctxt->ctxt, fdata->subctxt,
+				 node->notifier.interval_tree.start,
 				 node->rcventry, node->npages, node->dma_addr);
 	node->freed = true;
 
@@ -946,18 +930,7 @@ static int tid_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
 		fdata->invalid_tid_idx++;
 	}
 	spin_unlock(&fdata->invalid_lock);
-	return 0;
-}
-
-static int tid_rb_insert(void *arg, struct mmu_rb_node *node)
-{
-	struct hfi1_filedata *fdata = arg;
-	struct tid_rb_node *tnode =
-		container_of(node, struct tid_rb_node, mmu);
-	u32 base = fdata->uctxt->expected_base;
-
-	fdata->entry_to_rb[tnode->rcventry - base] = tnode;
-	return 0;
+	return true;
 }
 
 static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
@@ -968,12 +941,3 @@ static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
 	fdata->entry_to_rb[tnode->rcventry - base] = NULL;
 	clear_tid_node(fdata, tnode);
 }
-
-static void tid_rb_remove(void *arg, struct mmu_rb_node *node)
-{
-	struct hfi1_filedata *fdata = arg;
-	struct tid_rb_node *tnode =
-		container_of(node, struct tid_rb_node, mmu);
-
-	cacheless_tid_rb_remove(fdata, tnode);
-}
diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.h b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
index 43b105de1d5427..b5314db083b125 100644
--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.h
+++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
@@ -65,7 +65,8 @@ struct tid_user_buf {
 };
 
 struct tid_rb_node {
-	struct mmu_rb_node mmu;
+	struct mmu_range_notifier notifier;
+	struct hfi1_filedata *fdata;
 	unsigned long phys;
 	struct tid_group *grp;
 	u32 rcventry;
-- 
2.23.0



* [PATCH hmm 07/15] drm/radeon: use mmu_range_notifier_insert
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (5 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_insert for user_exp_rcv Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER Jason Gunthorpe
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Alex Deucher, Christian König,
	David Zhou, Petr Cvek

From: Jason Gunthorpe <jgg@mellanox.com>

The new API is an exact match for the needs of radeon.

For some reason radeon tries to remove overlapping ranges from the
interval tree, but interval trees (and mmu_range_notifier_insert)
support overlapping ranges directly. Simply delete all this code.

Since this driver is missing an invalidate_range_end callback but
still calls get_user_pages(), it cannot be correct against all races.

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: David (ChunMing) Zhou <David1.Zhou@amd.com>
Cc: amd-gfx@lists.freedesktop.org
Cc: Petr Cvek <petrcvekcz@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/gpu/drm/radeon/radeon.h    |   9 +-
 drivers/gpu/drm/radeon/radeon_mn.c | 218 ++++++-----------------------
 2 files changed, 51 insertions(+), 176 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index d59b004f669583..27959f3ace1152 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -68,6 +68,10 @@
 #include <linux/hashtable.h>
 #include <linux/dma-fence.h>
 
+#ifdef CONFIG_MMU_NOTIFIER
+#include <linux/mmu_notifier.h>
+#endif
+
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
@@ -509,8 +513,9 @@ struct radeon_bo {
 	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	pid_t				pid;
 
-	struct radeon_mn		*mn;
-	struct list_head		mn_list;
+#ifdef CONFIG_MMU_NOTIFIER
+	struct mmu_range_notifier	notifier;
+#endif
 };
 #define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, tbo.base)
 
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index dbab9a3a969b9e..2ea90874c535f5 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -36,131 +36,50 @@
 
 #include "radeon.h"
 
-struct radeon_mn {
-	struct mmu_notifier	mn;
-
-	/* objects protected by lock */
-	struct mutex		lock;
-	struct rb_root_cached	objects;
-};
-
-struct radeon_mn_node {
-	struct interval_tree_node	it;
-	struct list_head		bos;
-};
-
 /**
- * radeon_mn_invalidate_range_start - callback to notify about mm change
+ * radeon_mn_invalidate - callback to notify about mm change
  *
  * @mn: our notifier
- * @mn: the mm this callback is about
- * @start: start of updated range
- * @end: end of updated range
+ * @range: the VMA under invalidation
  *
  * We block for all BOs between start and end to be idle and
  * unmap them by move them into system domain again.
  */
-static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn,
-				const struct mmu_notifier_range *range)
+static bool radeon_mn_invalidate(struct mmu_range_notifier *mn,
+				 const struct mmu_notifier_range *range)
 {
-	struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn);
+	struct radeon_bo *bo = container_of(mn, struct radeon_bo, notifier);
 	struct ttm_operation_ctx ctx = { false, false };
-	struct interval_tree_node *it;
-	unsigned long end;
-	int ret = 0;
-
-	/* notification is exclusive, but interval is inclusive */
-	end = range->end - 1;
-
-	/* TODO we should be able to split locking for interval tree and
-	 * the tear down.
-	 */
-	if (mmu_notifier_range_blockable(range))
-		mutex_lock(&rmn->lock);
-	else if (!mutex_trylock(&rmn->lock))
-		return -EAGAIN;
-
-	it = interval_tree_iter_first(&rmn->objects, range->start, end);
-	while (it) {
-		struct radeon_mn_node *node;
-		struct radeon_bo *bo;
-		long r;
-
-		if (!mmu_notifier_range_blockable(range)) {
-			ret = -EAGAIN;
-			goto out_unlock;
-		}
-
-		node = container_of(it, struct radeon_mn_node, it);
-		it = interval_tree_iter_next(it, range->start, end);
+	long r;
 
-		list_for_each_entry(bo, &node->bos, mn_list) {
+	if (!bo->tbo.ttm || bo->tbo.ttm->state != tt_bound)
+		return true;
 
-			if (!bo->tbo.ttm || bo->tbo.ttm->state != tt_bound)
-				continue;
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-			r = radeon_bo_reserve(bo, true);
-			if (r) {
-				DRM_ERROR("(%ld) failed to reserve user bo\n", r);
-				continue;
-			}
-
-			r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
-				true, false, MAX_SCHEDULE_TIMEOUT);
-			if (r <= 0)
-				DRM_ERROR("(%ld) failed to wait for user bo\n", r);
-
-			radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
-			r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
-			if (r)
-				DRM_ERROR("(%ld) failed to validate user bo\n", r);
-
-			radeon_bo_unreserve(bo);
-		}
+	r = radeon_bo_reserve(bo, true);
+	if (r) {
+		DRM_ERROR("(%ld) failed to reserve user bo\n", r);
+		return true;
 	}
-	
-out_unlock:
-	mutex_unlock(&rmn->lock);
-
-	return ret;
-}
-
-static void radeon_mn_release(struct mmu_notifier *mn, struct mm_struct *mm)
-{
-	struct mmu_notifier_range range = {
-		.mm = mm,
-		.start = 0,
-		.end = ULONG_MAX,
-		.flags = 0,
-		.event = MMU_NOTIFY_UNMAP,
-	};
-
-	radeon_mn_invalidate_range_start(mn, &range);
-}
-
-static struct mmu_notifier *radeon_mn_alloc_notifier(struct mm_struct *mm)
-{
-	struct radeon_mn *rmn;
 
-	rmn = kzalloc(sizeof(*rmn), GFP_KERNEL);
-	if (!rmn)
-		return ERR_PTR(-ENOMEM);
+	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
+				      MAX_SCHEDULE_TIMEOUT);
+	if (r <= 0)
+		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
 
-	mutex_init(&rmn->lock);
-	rmn->objects = RB_ROOT_CACHED;
-	return &rmn->mn;
-}
+	radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
+	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+	if (r)
+		DRM_ERROR("(%ld) failed to validate user bo\n", r);
 
-static void radeon_mn_free_notifier(struct mmu_notifier *mn)
-{
-	kfree(container_of(mn, struct radeon_mn, mn));
+	radeon_bo_unreserve(bo);
+	return true;
 }
 
-static const struct mmu_notifier_ops radeon_mn_ops = {
-	.release = radeon_mn_release,
-	.invalidate_range_start = radeon_mn_invalidate_range_start,
-	.alloc_notifier = radeon_mn_alloc_notifier,
-	.free_notifier = radeon_mn_free_notifier,
+static const struct mmu_range_notifier_ops radeon_mn_ops = {
+	.invalidate = radeon_mn_invalidate,
 };
 
 /**
@@ -174,51 +93,21 @@ static const struct mmu_notifier_ops radeon_mn_ops = {
  */
 int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
 {
-	unsigned long end = addr + radeon_bo_size(bo) - 1;
-	struct mmu_notifier *mn;
-	struct radeon_mn *rmn;
-	struct radeon_mn_node *node = NULL;
-	struct list_head bos;
-	struct interval_tree_node *it;
-
-	mn = mmu_notifier_get(&radeon_mn_ops, current->mm);
-	if (IS_ERR(mn))
-		return PTR_ERR(mn);
-	rmn = container_of(mn, struct radeon_mn, mn);
-
-	INIT_LIST_HEAD(&bos);
-
-	mutex_lock(&rmn->lock);
-
-	while ((it = interval_tree_iter_first(&rmn->objects, addr, end))) {
-		kfree(node);
-		node = container_of(it, struct radeon_mn_node, it);
-		interval_tree_remove(&node->it, &rmn->objects);
-		addr = min(it->start, addr);
-		end = max(it->last, end);
-		list_splice(&node->bos, &bos);
-	}
-
-	if (!node) {
-		node = kmalloc(sizeof(struct radeon_mn_node), GFP_KERNEL);
-		if (!node) {
-			mutex_unlock(&rmn->lock);
-			return -ENOMEM;
-		}
-	}
-
-	bo->mn = rmn;
-
-	node->it.start = addr;
-	node->it.last = end;
-	INIT_LIST_HEAD(&node->bos);
-	list_splice(&bos, &node->bos);
-	list_add(&bo->mn_list, &node->bos);
-
-	interval_tree_insert(&node->it, &rmn->objects);
-
-	mutex_unlock(&rmn->lock);
-
+	int ret;
+
+	bo->notifier.ops = &radeon_mn_ops;
+	ret = mmu_range_notifier_insert(&bo->notifier, addr, radeon_bo_size(bo),
+					current->mm);
+	if (ret)
+		return ret;
+
+	/*
+	 * FIXME: radeon appears to allow get_user_pages to run during
+	 * invalidate_range_start/end, which is not a safe way to read the
+	 * PTEs. It should use the mmu_range_read_begin() scheme around the
+	 * get_user_pages to ensure that the PTEs are read properly
+	 */
+	mmu_range_read_begin(&bo->notifier);
 	return 0;
 }
 
@@ -231,27 +120,8 @@ int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
  */
 void radeon_mn_unregister(struct radeon_bo *bo)
 {
-	struct radeon_mn *rmn = bo->mn;
-	struct list_head *head;
-
-	if (!rmn)
+	if (!bo->notifier.mm)
 		return;
-
-	mutex_lock(&rmn->lock);
-	/* save the next list entry for later */
-	head = bo->mn_list.next;
-
-	list_del(&bo->mn_list);
-
-	if (list_empty(head)) {
-		struct radeon_mn_node *node;
-		node = container_of(head, struct radeon_mn_node, bos);
-		interval_tree_remove(&node->it, &rmn->objects);
-		kfree(node);
-	}
-
-	mutex_unlock(&rmn->lock);
-
-	mmu_notifier_put(&rmn->mn);
-	bo->mn = NULL;
+	mmu_range_notifier_remove(&bo->notifier);
+	bo->notifier.mm = NULL;
 }
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (6 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 07/15] drm/radeon: use mmu_range_notifier_insert Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-16  5:11   ` Jürgen Groß
  2019-10-15 18:12 ` [PATCH hmm 09/15] xen/gntdev: use mmu_range_notifier_insert Jason Gunthorpe
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Oleksandr Andrushchenko,
	Boris Ostrovsky, xen-devel, Juergen Gross, Stefano Stabellini

From: Jason Gunthorpe <jgg@mellanox.com>

DMA_SHARED_BUFFER cannot be enabled by the user (it represents a library
set in the kernel). The kconfig convention is to use select for such
symbols so they are turned on implicitly when the user enables a kconfig
option that needs them.

Otherwise the XEN_GNTDEV_DMABUF kconfig is overly difficult to enable.
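[Editorial note: the convention described above can be shown generically. The symbol names below other than DMA_SHARED_BUFFER are hypothetical, for illustration only — the actual change is in the diff further down.]

```
config FOO_FEATURE
	bool "Enable foo feature"
	# User-visible prerequisites remain 'depends on'.
	depends on FOO
	# Hidden library symbols (no kconfig prompt) are selected, so they
	# are enabled implicitly when the user turns this option on.
	select DMA_SHARED_BUFFER
```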

Fixes: 932d6562179e ("xen/gntdev: Add initial support for dma-buf UAPI")
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/xen/Kconfig | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 79cc75096f4232..a50dadd0109336 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -141,7 +141,8 @@ config XEN_GNTDEV
 
 config XEN_GNTDEV_DMABUF
 	bool "Add support for dma-buf grant access device driver extension"
-	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC && DMA_SHARED_BUFFER
+	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC
+	select DMA_SHARED_BUFFER
 	help
 	  Allows userspace processes and kernel modules to use Xen backed
 	  dma-buf implementation. With this extension grant references to
-- 
2.23.0



* [PATCH hmm 09/15] xen/gntdev: use mmu_range_notifier_insert
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (7 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start Jason Gunthorpe
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Oleksandr Andrushchenko,
	Boris Ostrovsky, xen-devel, Juergen Gross, Stefano Stabellini

From: Jason Gunthorpe <jgg@mellanox.com>

gntdev simply wants to monitor a specific VMA for any notifier events;
this can be done straightforwardly using mmu_range_notifier_insert() over
the VMA's VA range.

The notifier should be attached until the original VMA is destroyed.

It is unclear if any of this is even sane, but at least a lot of duplicate
code is removed.

Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev.c        | 179 ++++++++++--------------------------
 2 files changed, 48 insertions(+), 139 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 2f8b949c3eeb14..b201fdd20b667b 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -21,15 +21,8 @@ struct gntdev_dmabuf_priv;
 struct gntdev_priv {
 	/* Maps with visible offsets in the file descriptor. */
 	struct list_head maps;
-	/*
-	 * Maps that are not visible; will be freed on munmap.
-	 * Only populated if populate_freeable_maps == 1
-	 */
-	struct list_head freeable_maps;
 	/* lock protects maps and freeable_maps. */
 	struct mutex lock;
-	struct mm_struct *mm;
-	struct mmu_notifier mn;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 	/* Device for which DMA memory is allocated. */
@@ -49,6 +42,7 @@ struct gntdev_unmap_notify {
 };
 
 struct gntdev_grant_map {
+	struct mmu_range_notifier notifier;
 	struct list_head next;
 	struct vm_area_struct *vma;
 	int index;
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index a446a7221e13e9..234153f556afd6 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -65,7 +65,6 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
 static int use_ptemod;
-#define populate_freeable_maps use_ptemod
 
 static int unmap_grant_pages(struct gntdev_grant_map *map,
 			     int offset, int pages);
@@ -251,12 +250,6 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 		evtchn_put(map->notify.event);
 	}
 
-	if (populate_freeable_maps && priv) {
-		mutex_lock(&priv->lock);
-		list_del(&map->next);
-		mutex_unlock(&priv->lock);
-	}
-
 	if (map->pages && !use_ptemod)
 		unmap_grant_pages(map, 0, map->count);
 	gntdev_free_map(map);
@@ -445,17 +438,9 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
 	struct gntdev_priv *priv = file->private_data;
 
 	pr_debug("gntdev_vma_close %p\n", vma);
-	if (use_ptemod) {
-		/* It is possible that an mmu notifier could be running
-		 * concurrently, so take priv->lock to ensure that the vma won't
-		 * vanishing during the unmap_grant_pages call, since we will
-		 * spin here until that completes. Such a concurrent call will
-		 * not do any unmapping, since that has been done prior to
-		 * closing the vma, but it may still iterate the unmap_ops list.
-		 */
-		mutex_lock(&priv->lock);
+	if (use_ptemod && map->vma == vma) {
+		mmu_range_notifier_remove(&map->notifier);
 		map->vma = NULL;
-		mutex_unlock(&priv->lock);
 	}
 	vma->vm_private_data = NULL;
 	gntdev_put_map(priv, map);
@@ -477,109 +462,43 @@ static const struct vm_operations_struct gntdev_vmops = {
 
 /* ------------------------------------------------------------------ */
 
-static bool in_range(struct gntdev_grant_map *map,
-			      unsigned long start, unsigned long end)
-{
-	if (!map->vma)
-		return false;
-	if (map->vma->vm_start >= end)
-		return false;
-	if (map->vma->vm_end <= start)
-		return false;
-
-	return true;
-}
-
-static int unmap_if_in_range(struct gntdev_grant_map *map,
-			      unsigned long start, unsigned long end,
-			      bool blockable)
+static bool gntdev_invalidate(struct mmu_range_notifier *mn,
+			      const struct mmu_notifier_range *range)
 {
+	struct gntdev_grant_map *map =
+		container_of(mn, struct gntdev_grant_map, notifier);
 	unsigned long mstart, mend;
 	int err;
 
-	if (!in_range(map, start, end))
-		return 0;
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-	if (!blockable)
-		return -EAGAIN;
+	/*
+	 * If the VMA is split or otherwise changed the notifier is not
+	 * updated, but we don't want to process VA's outside the modified
+	 * VMA. FIXME: It would be much more understandable to just prevent
+	 * modifying the VMA in the first place.
+	 */
+	if (map->vma->vm_start >= range->end ||
+	    map->vma->vm_end <= range->start)
+		return true;
 
-	mstart = max(start, map->vma->vm_start);
-	mend   = min(end,   map->vma->vm_end);
+	mstart = max(range->start, map->vma->vm_start);
+	mend = min(range->end, map->vma->vm_end);
 	pr_debug("map %d+%d (%lx %lx), range %lx %lx, mrange %lx %lx\n",
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
-			start, end, mstart, mend);
+			range->start, range->end, mstart, mend);
 	err = unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
 	WARN_ON(err);
 
-	return 0;
-}
-
-static int mn_invl_range_start(struct mmu_notifier *mn,
-			       const struct mmu_notifier_range *range)
-{
-	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
-	struct gntdev_grant_map *map;
-	int ret = 0;
-
-	if (mmu_notifier_range_blockable(range))
-		mutex_lock(&priv->lock);
-	else if (!mutex_trylock(&priv->lock))
-		return -EAGAIN;
-
-	list_for_each_entry(map, &priv->maps, next) {
-		ret = unmap_if_in_range(map, range->start, range->end,
-					mmu_notifier_range_blockable(range));
-		if (ret)
-			goto out_unlock;
-	}
-	list_for_each_entry(map, &priv->freeable_maps, next) {
-		ret = unmap_if_in_range(map, range->start, range->end,
-					mmu_notifier_range_blockable(range));
-		if (ret)
-			goto out_unlock;
-	}
-
-out_unlock:
-	mutex_unlock(&priv->lock);
-
-	return ret;
-}
-
-static void mn_release(struct mmu_notifier *mn,
-		       struct mm_struct *mm)
-{
-	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
-	struct gntdev_grant_map *map;
-	int err;
-
-	mutex_lock(&priv->lock);
-	list_for_each_entry(map, &priv->maps, next) {
-		if (!map->vma)
-			continue;
-		pr_debug("map %d+%d (%lx %lx)\n",
-				map->index, map->count,
-				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
-	}
-	list_for_each_entry(map, &priv->freeable_maps, next) {
-		if (!map->vma)
-			continue;
-		pr_debug("map %d+%d (%lx %lx)\n",
-				map->index, map->count,
-				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
-	}
-	mutex_unlock(&priv->lock);
+	return true;
 }
 
-static const struct mmu_notifier_ops gntdev_mmu_ops = {
-	.release                = mn_release,
-	.invalidate_range_start = mn_invl_range_start,
+static const struct mmu_range_notifier_ops gntdev_mmu_ops = {
+	.invalidate = gntdev_invalidate,
 };
 
 /* ------------------------------------------------------------------ */
@@ -594,7 +513,6 @@ static int gntdev_open(struct inode *inode, struct file *flip)
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&priv->maps);
-	INIT_LIST_HEAD(&priv->freeable_maps);
 	mutex_init(&priv->lock);
 
 #ifdef CONFIG_XEN_GNTDEV_DMABUF
@@ -606,17 +524,6 @@ static int gntdev_open(struct inode *inode, struct file *flip)
 	}
 #endif
 
-	if (use_ptemod) {
-		priv->mm = get_task_mm(current);
-		if (!priv->mm) {
-			kfree(priv);
-			return -ENOMEM;
-		}
-		priv->mn.ops = &gntdev_mmu_ops;
-		ret = mmu_notifier_register(&priv->mn, priv->mm);
-		mmput(priv->mm);
-	}
-
 	if (ret) {
 		kfree(priv);
 		return ret;
@@ -653,16 +560,12 @@ static int gntdev_release(struct inode *inode, struct file *flip)
 		list_del(&map->next);
 		gntdev_put_map(NULL /* already removed */, map);
 	}
-	WARN_ON(!list_empty(&priv->freeable_maps));
 	mutex_unlock(&priv->lock);
 
 #ifdef CONFIG_XEN_GNTDEV_DMABUF
 	gntdev_dmabuf_fini(priv->dmabuf_priv);
 #endif
 
-	if (use_ptemod)
-		mmu_notifier_unregister(&priv->mn, priv->mm);
-
 	kfree(priv);
 	return 0;
 }
@@ -723,8 +626,6 @@ static long gntdev_ioctl_unmap_grant_ref(struct gntdev_priv *priv,
 	map = gntdev_find_map_index(priv, op.index >> PAGE_SHIFT, op.count);
 	if (map) {
 		list_del(&map->next);
-		if (populate_freeable_maps)
-			list_add_tail(&map->next, &priv->freeable_maps);
 		err = 0;
 	}
 	mutex_unlock(&priv->lock);
@@ -1096,11 +997,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	if (use_ptemod && map->vma)
 		goto unlock_out;
-	if (use_ptemod && priv->mm != vma->vm_mm) {
-		pr_warn("Huh? Other mm?\n");
-		goto unlock_out;
-	}
-
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
@@ -1111,10 +1007,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		vma->vm_flags |= VM_DONTCOPY;
 
 	vma->vm_private_data = map;
-
-	if (use_ptemod)
-		map->vma = vma;
-
 	if (map->flags) {
 		if ((vma->vm_flags & VM_WRITE) &&
 				(map->flags & GNTMAP_readonly))
@@ -1125,8 +1017,28 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 			map->flags |= GNTMAP_readonly;
 	}
 
+	if (use_ptemod) {
+		map->vma = vma;
+		err = mmu_range_notifier_insert_locked(
+			&map->notifier, vma->vm_start,
+			vma->vm_end - vma->vm_start, vma->vm_mm);
+		if (err)
+			goto out_unlock_put;
+	}
 	mutex_unlock(&priv->lock);
 
+	/*
+	 * gntdev takes the address of the PTE in find_grant_ptes() and passes
+	 * it to the hypervisor in gntdev_map_grant_pages(). The purpose of
+	 * the notifier is to prevent the hypervisor pointer to the PTE from
+	 * going stale.
+	 *
+	 * Since this vma's mappings can't be touched without the mmap_sem,
+	 * and we are holding it now, there is no need for the notifier_range
+	 * locking pattern.
+	 */
+	mmu_range_read_begin(&map->notifier);
+
 	if (use_ptemod) {
 		map->pages_vm_start = vma->vm_start;
 		err = apply_to_page_range(vma->vm_mm, vma->vm_start,
@@ -1175,8 +1087,11 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 	mutex_unlock(&priv->lock);
 out_put_map:
 	if (use_ptemod) {
-		map->vma = NULL;
 		unmap_grant_pages(map, 0, map->count);
+		if (map->vma) {
+			mmu_range_notifier_remove(&map->notifier);
+			map->vma = NULL;
+		}
 	}
 	gntdev_put_map(priv, map);
 	return err;
-- 
2.23.0



* [PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (8 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 09/15] xen/gntdev: use mmu_range_notifier_insert Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 11/15] nouveau: use mmu_range_notifier instead of hmm_mirror Jason Gunthorpe
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, nouveau

From: Jason Gunthorpe <jgg@mellanox.com>

There is no reason to get the invalidate_range_start() callback via an
indirection through hmm_mirror; just register a normal notifier directly.

Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 95 ++++++++++++++++++---------
 1 file changed, 63 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 668d4bd0c118f1..577f8811925a59 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -88,6 +88,7 @@ nouveau_ivmm_find(struct nouveau_svm *svm, u64 inst)
 }
 
 struct nouveau_svmm {
+	struct mmu_notifier notifier;
 	struct nouveau_vmm *vmm;
 	struct {
 		unsigned long start;
@@ -96,7 +97,6 @@ struct nouveau_svmm {
 
 	struct mutex mutex;
 
-	struct mm_struct *mm;
 	struct hmm_mirror mirror;
 };
 
@@ -251,10 +251,11 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
 }
 
 static int
-nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
-					const struct mmu_notifier_range *update)
+nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
+				    const struct mmu_notifier_range *update)
 {
-	struct nouveau_svmm *svmm = container_of(mirror, typeof(*svmm), mirror);
+	struct nouveau_svmm *svmm =
+		container_of(mn, struct nouveau_svmm, notifier);
 	unsigned long start = update->start;
 	unsigned long limit = update->end;
 
@@ -264,6 +265,9 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
 	SVMM_DBG(svmm, "invalidate %016lx-%016lx", start, limit);
 
 	mutex_lock(&svmm->mutex);
+	if (unlikely(!svmm->vmm))
+		goto out;
+
 	if (limit > svmm->unmanaged.start && start < svmm->unmanaged.limit) {
 		if (start < svmm->unmanaged.start) {
 			nouveau_svmm_invalidate(svmm, start,
@@ -273,19 +277,31 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
 	}
 
 	nouveau_svmm_invalidate(svmm, start, limit);
+
+out:
 	mutex_unlock(&svmm->mutex);
 	return 0;
 }
 
-static void
-nouveau_svmm_release(struct hmm_mirror *mirror)
+static void nouveau_svmm_free_notifier(struct mmu_notifier *mn)
+{
+	kfree(container_of(mn, struct nouveau_svmm, notifier));
+}
+
+static const struct mmu_notifier_ops nouveau_mn_ops = {
+	.invalidate_range_start = nouveau_svmm_invalidate_range_start,
+	.free_notifier = nouveau_svmm_free_notifier,
+};
+
+static int
+nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
+					const struct mmu_notifier_range *update)
 {
+	return 0;
 }
 
-static const struct hmm_mirror_ops
-nouveau_svmm = {
+static const struct hmm_mirror_ops nouveau_svmm = {
 	.sync_cpu_device_pagetables = nouveau_svmm_sync_cpu_device_pagetables,
-	.release = nouveau_svmm_release,
 };
 
 void
@@ -294,7 +310,10 @@ nouveau_svmm_fini(struct nouveau_svmm **psvmm)
 	struct nouveau_svmm *svmm = *psvmm;
 	if (svmm) {
 		hmm_mirror_unregister(&svmm->mirror);
-		kfree(*psvmm);
+		mutex_lock(&svmm->mutex);
+		svmm->vmm = NULL;
+		mutex_unlock(&svmm->mutex);
+		mmu_notifier_put(&svmm->notifier);
 		*psvmm = NULL;
 	}
 }
@@ -320,7 +339,7 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 	mutex_lock(&cli->mutex);
 	if (cli->svm.cli) {
 		ret = -EBUSY;
-		goto done;
+		goto out_free;
 	}
 
 	/* Allocate a new GPU VMM that can support SVM (managed by the
@@ -335,24 +354,33 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 				.fault_replay = true,
 			    }, sizeof(struct gp100_vmm_v0), &cli->svm.vmm);
 	if (ret)
-		goto done;
+		goto out_free;
 
-	/* Enable HMM mirroring of CPU address-space to VMM. */
-	svmm->mm = get_task_mm(current);
-	down_write(&svmm->mm->mmap_sem);
+	down_write(&current->mm->mmap_sem);
 	svmm->mirror.ops = &nouveau_svmm;
-	ret = hmm_mirror_register(&svmm->mirror, svmm->mm);
-	if (ret == 0) {
-		cli->svm.svmm = svmm;
-		cli->svm.cli = cli;
-	}
-	up_write(&svmm->mm->mmap_sem);
-	mmput(svmm->mm);
+	ret = hmm_mirror_register(&svmm->mirror, current->mm);
+	if (ret)
+		goto out_mm_unlock;
 
-done:
+	svmm->notifier.ops = &nouveau_mn_ops;
+	ret = __mmu_notifier_register(&svmm->notifier, current->mm);
 	if (ret)
-		nouveau_svmm_fini(&svmm);
+		goto out_hmm_unregister;
+	/* Note, ownership of svmm transfers to mmu_notifier */
+
+	cli->svm.svmm = svmm;
+	cli->svm.cli = cli;
+	up_write(&current->mm->mmap_sem);
 	mutex_unlock(&cli->mutex);
+	return 0;
+
+out_hmm_unregister:
+	hmm_mirror_unregister(&svmm->mirror);
+out_mm_unlock:
+	up_write(&current->mm->mmap_sem);
+out_free:
+	mutex_unlock(&cli->mutex);
+	kfree(svmm);
 	return ret;
 }
 
@@ -494,12 +522,12 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 
 	ret = hmm_range_register(range, &svmm->mirror);
 	if (ret) {
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		return (int)ret;
 	}
 
 	if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		return -EBUSY;
 	}
 
@@ -507,7 +535,7 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 	if (ret <= 0) {
 		if (ret == 0)
 			ret = -EBUSY;
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		hmm_range_unregister(range);
 		return ret;
 	}
@@ -587,12 +615,15 @@ nouveau_svm_fault(struct nvif_notify *notify)
 	args.i.p.version = 0;
 
 	for (fi = 0; fn = fi + 1, fi < buffer->fault_nr; fi = fn) {
+		struct mm_struct *mm;
+
 		/* Cancel any faults from non-SVM channels. */
 		if (!(svmm = buffer->fault[fi]->svmm)) {
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
 		SVMM_DBG(svmm, "addr %016llx", buffer->fault[fi]->addr);
+		mm = svmm->notifier.mm;
 
 		/* We try and group handling of faults within a small
 		 * window into a single update.
@@ -609,11 +640,11 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		/* Intersect fault window with the CPU VMA, cancelling
 		 * the fault if the address is invalid.
 		 */
-		down_read(&svmm->mm->mmap_sem);
-		vma = find_vma_intersection(svmm->mm, start, limit);
+		down_read(&mm->mmap_sem);
+		vma = find_vma_intersection(mm, start, limit);
 		if (!vma) {
 			SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
@@ -623,7 +654,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 
 		if (buffer->fault[fi]->addr != start) {
 			SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
@@ -704,7 +735,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 						NULL);
 			svmm->vmm->vmm.object.client->super = false;
 			mutex_unlock(&svmm->mutex);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 		}
 
 		/* Cancel any faults in the window whose pages didn't manage
-- 
2.23.0



* [PATCH hmm 11/15] nouveau: use mmu_range_notifier instead of hmm_mirror
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (9 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-15 18:12 ` [PATCH hmm 12/15] drm/amdgpu: Call find_vma under mmap_sem Jason Gunthorpe
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, nouveau

From: Jason Gunthorpe <jgg@mellanox.com>

Remove the hmm_mirror object and use the mmu_range_notifier API instead
for the range, and use the normal mmu_notifier API for the general
invalidation callback.

While here, re-organize the pagefault path so the locking pattern is clear.

nouveau is the only driver that uses a temporary range object and instead
forwards nearly every invalidation range directly to the HW. While this is
not how the mmu_range_notifier was intended to be used, the overheads on
the pagefaulting path are similar to the existing hmm_mirror version,
particularly since the interval tree will be small.

Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 178 ++++++++++++++------------
 1 file changed, 98 insertions(+), 80 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 577f8811925a59..712c99918551bc 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -96,8 +96,6 @@ struct nouveau_svmm {
 	} unmanaged;
 
 	struct mutex mutex;
-
-	struct hmm_mirror mirror;
 };
 
 #define SVMM_DBG(s,f,a...)                                                     \
@@ -293,23 +291,11 @@ static const struct mmu_notifier_ops nouveau_mn_ops = {
 	.free_notifier = nouveau_svmm_free_notifier,
 };
 
-static int
-nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
-					const struct mmu_notifier_range *update)
-{
-	return 0;
-}
-
-static const struct hmm_mirror_ops nouveau_svmm = {
-	.sync_cpu_device_pagetables = nouveau_svmm_sync_cpu_device_pagetables,
-};
-
 void
 nouveau_svmm_fini(struct nouveau_svmm **psvmm)
 {
 	struct nouveau_svmm *svmm = *psvmm;
 	if (svmm) {
-		hmm_mirror_unregister(&svmm->mirror);
 		mutex_lock(&svmm->mutex);
 		svmm->vmm = NULL;
 		mutex_unlock(&svmm->mutex);
@@ -357,15 +343,10 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 		goto out_free;
 
 	down_write(&current->mm->mmap_sem);
-	svmm->mirror.ops = &nouveau_svmm;
-	ret = hmm_mirror_register(&svmm->mirror, current->mm);
-	if (ret)
-		goto out_mm_unlock;
-
 	svmm->notifier.ops = &nouveau_mn_ops;
 	ret = __mmu_notifier_register(&svmm->notifier, current->mm);
 	if (ret)
-		goto out_hmm_unregister;
+		goto out_mm_unlock;
 	/* Note, ownership of svmm transfers to mmu_notifier */
 
 	cli->svm.svmm = svmm;
@@ -374,8 +355,6 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 	mutex_unlock(&cli->mutex);
 	return 0;
 
-out_hmm_unregister:
-	hmm_mirror_unregister(&svmm->mirror);
 out_mm_unlock:
 	up_write(&current->mm->mmap_sem);
 out_free:
@@ -503,43 +482,89 @@ nouveau_svm_fault_cache(struct nouveau_svm *svm,
 		fault->inst, fault->addr, fault->access);
 }
 
-static inline bool
-nouveau_range_done(struct hmm_range *range)
+struct svm_notifier {
+	struct mmu_range_notifier notifier;
+	struct nouveau_svmm *svmm;
+};
+
+static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
+					 const struct mmu_notifier_range *range)
 {
-	bool ret = hmm_range_valid(range);
+	struct svm_notifier *sn =
+		container_of(mrn, struct svm_notifier, notifier);
 
-	hmm_range_unregister(range);
-	return ret;
+	/*
+	 * serializes the update to mrn->invalidate_seq done by caller and
+	 * prevents invalidation of the PTE from progressing while HW is being
+	 * programmed. This is very hacky and only works because the normal
+	 * notifier that does invalidation is always called after the range
+	 * notifier.
+	 */
+	if (mmu_notifier_range_blockable(range))
+		mutex_lock(&sn->svmm->mutex);
+	else if (!mutex_trylock(&sn->svmm->mutex))
+		return false;
+	mutex_unlock(&sn->svmm->mutex);
+	return true;
 }
 
-static int
-nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
+static const struct mmu_range_notifier_ops nouveau_svm_mrn_ops = {
+	.invalidate = nouveau_svm_range_invalidate,
+};
+
+static int nouveau_range_fault(struct nouveau_svmm *svmm,
+			       struct nouveau_drm *drm, void *data, u32 size,
+			       u64 *pfns,
+			       struct svm_notifier *notifier)
 {
+	unsigned long timeout =
+		jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
+	/* Have HMM fault pages within the fault window to the GPU. */
+	struct hmm_range range = {
+		.notifier = &notifier->notifier,
+		.start = notifier->notifier.interval_tree.start,
+		.end = notifier->notifier.interval_tree.last + 1,
+		.pfns = pfns,
+		.flags = nouveau_svm_pfn_flags,
+		.values = nouveau_svm_pfn_values,
+		.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT,
+	};
+	struct mm_struct *mm = notifier->notifier.mm;
 	long ret;
 
-	range->default_flags = 0;
-	range->pfn_flags_mask = -1UL;
+	while (true) {
+		if (time_after(jiffies, timeout))
+			return -EBUSY;
 
-	ret = hmm_range_register(range, &svmm->mirror);
-	if (ret) {
-		up_read(&svmm->notifier.mm->mmap_sem);
-		return (int)ret;
-	}
+		range.notifier_seq = mmu_range_read_begin(range.notifier);
+		range.default_flags = 0;
+		range.pfn_flags_mask = -1UL;
+		down_read(&mm->mmap_sem);
+		ret = hmm_range_fault(&range, 0);
+		up_read(&mm->mmap_sem);
+		if (ret <= 0) {
+			if (ret == 0 || ret == -EBUSY)
+				continue;
+			return ret;
+		}
 
-	if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
-		up_read(&svmm->notifier.mm->mmap_sem);
-		return -EBUSY;
+		mutex_lock(&svmm->mutex);
+		if (mmu_range_read_retry(range.notifier,
+					 range.notifier_seq)) {
+			mutex_unlock(&svmm->mutex);
+			continue;
+		}
+		break;
 	}
 
-	ret = hmm_range_fault(range, 0);
-	if (ret <= 0) {
-		if (ret == 0)
-			ret = -EBUSY;
-		up_read(&svmm->notifier.mm->mmap_sem);
-		hmm_range_unregister(range);
-		return ret;
-	}
-	return 0;
+	nouveau_dmem_convert_pfn(drm, &range);
+
+	svmm->vmm->vmm.object.client->super = true;
+	ret = nvif_object_ioctl(&svmm->vmm->vmm.object, data, size, NULL);
+	svmm->vmm->vmm.object.client->super = false;
+	mutex_unlock(&svmm->mutex);
+
+	return ret;
 }
 
 static int
@@ -559,7 +584,6 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		} i;
 		u64 phys[16];
 	} args;
-	struct hmm_range range;
 	struct vm_area_struct *vma;
 	u64 inst, start, limit;
 	int fi, fn, pi, fill;
@@ -615,6 +639,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 	args.i.p.version = 0;
 
 	for (fi = 0; fn = fi + 1, fi < buffer->fault_nr; fi = fn) {
+		struct svm_notifier notifier;
 		struct mm_struct *mm;
 
 		/* Cancel any faults from non-SVM channels. */
@@ -623,7 +648,6 @@ nouveau_svm_fault(struct nvif_notify *notify)
 			continue;
 		}
 		SVMM_DBG(svmm, "addr %016llx", buffer->fault[fi]->addr);
-		mm = svmm->notifier.mm;
 
 		/* We try and group handling of faults within a small
 		 * window into a single update.
@@ -637,6 +661,12 @@ nouveau_svm_fault(struct nvif_notify *notify)
 			start = max_t(u64, start, svmm->unmanaged.limit);
 		SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit);
 
+		mm = svmm->notifier.mm;
+		if (!mmget_not_zero(mm)) {
+			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
+			continue;
+		}
+
 		/* Intersect fault window with the CPU VMA, cancelling
 		 * the fault if the address is invalid.
 		 */
@@ -645,16 +675,18 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		if (!vma) {
 			SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit);
 			up_read(&mm->mmap_sem);
+			mmput(mm);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
 		start = max_t(u64, start, vma->vm_start);
 		limit = min_t(u64, limit, vma->vm_end);
+		up_read(&mm->mmap_sem);
 		SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit);
 
 		if (buffer->fault[fi]->addr != start) {
 			SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr);
-			up_read(&mm->mmap_sem);
+			mmput(mm);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
@@ -710,33 +742,19 @@ nouveau_svm_fault(struct nvif_notify *notify)
 			 args.i.p.addr,
 			 args.i.p.addr + args.i.p.size, fn - fi);
 
-		/* Have HMM fault pages within the fault window to the GPU. */
-		range.start = args.i.p.addr;
-		range.end = args.i.p.addr + args.i.p.size;
-		range.pfns = args.phys;
-		range.flags = nouveau_svm_pfn_flags;
-		range.values = nouveau_svm_pfn_values;
-		range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT;
-again:
-		ret = nouveau_range_fault(svmm, &range);
-		if (ret == 0) {
-			mutex_lock(&svmm->mutex);
-			if (!nouveau_range_done(&range)) {
-				mutex_unlock(&svmm->mutex);
-				goto again;
-			}
-
-			nouveau_dmem_convert_pfn(svm->drm, &range);
-
-			svmm->vmm->vmm.object.client->super = true;
-			ret = nvif_object_ioctl(&svmm->vmm->vmm.object,
-						&args, sizeof(args.i) +
-						pi * sizeof(args.phys[0]),
-						NULL);
-			svmm->vmm->vmm.object.client->super = false;
-			mutex_unlock(&svmm->mutex);
-			up_read(&mm->mmap_sem);
+		notifier.svmm = svmm;
+		notifier.notifier.ops = &nouveau_svm_mrn_ops;
+		ret = mmu_range_notifier_insert(&notifier.notifier,
+						args.i.p.addr, args.i.p.size,
+						svmm->notifier.mm);
+		if (!ret) {
+			ret = nouveau_range_fault(
+				svmm, svm->drm, &args,
+				sizeof(args.i) + pi * sizeof(args.phys[0]),
+				args.phys, &notifier);
+			mmu_range_notifier_remove(&notifier.notifier);
 		}
+		mmput(mm);
 
 		/* Cancel any faults in the window whose pages didn't manage
 		 * to keep their valid bit, or stay writeable when required.
@@ -745,10 +763,10 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		 */
 		while (fi < fn) {
 			struct nouveau_svm_fault *fault = buffer->fault[fi++];
-			pi = (fault->addr - range.start) >> PAGE_SHIFT;
+			pi = (fault->addr - args.i.p.addr) >> PAGE_SHIFT;
 			if (ret ||
-			     !(range.pfns[pi] & NVIF_VMM_PFNMAP_V0_V) ||
-			    (!(range.pfns[pi] & NVIF_VMM_PFNMAP_V0_W) &&
+			     !(args.phys[pi] & NVIF_VMM_PFNMAP_V0_V) ||
+			    (!(args.phys[pi] & NVIF_VMM_PFNMAP_V0_W) &&
 			     fault->access != 0 && fault->access != 3)) {
 				nouveau_svm_fault_cancel_fault(svm, fault);
 				continue;
-- 
2.23.0



* [PATCH hmm 12/15] drm/amdgpu: Call find_vma under mmap_sem
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Alex Deucher, Christian König,
	David Zhou

From: Jason Gunthorpe <jgg@mellanox.com>

find_vma() must be called under the mmap_sem; reorganize this code to
do the vma check after taking the lock.

Further, fix the unlocked use of struct task_struct's mm: instead use
the mm from the hmm_mirror, which holds an active mmgrab. The mmgrab
must also be converted to an mmget (via mmget_not_zero()) before
acquiring mmap_sem or calling find_vma().

Fixes: 66c45500bfdc ("drm/amdgpu: use new HMM APIs and helpers")
Fixes: 0919195f2b0d ("drm/amdgpu: Enable amdgpu_ttm_tt_get_user_pages in worker threads")
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: David (ChunMing) Zhou <David1.Zhou@amd.com>
Cc: amd-gfx@lists.freedesktop.org
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 37 ++++++++++++++-----------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index dff41d0a85fe96..c0e41f1f0c2365 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -35,6 +35,7 @@
 #include <linux/hmm.h>
 #include <linux/pagemap.h>
 #include <linux/sched/task.h>
+#include <linux/sched/mm.h>
 #include <linux/seq_file.h>
 #include <linux/slab.h>
 #include <linux/swap.h>
@@ -788,7 +789,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	struct hmm_mirror *mirror = bo->mn ? &bo->mn->mirror : NULL;
 	struct ttm_tt *ttm = bo->tbo.ttm;
 	struct amdgpu_ttm_tt *gtt = (void *)ttm;
-	struct mm_struct *mm = gtt->usertask->mm;
+	struct mm_struct *mm;
 	unsigned long start = gtt->userptr;
 	struct vm_area_struct *vma;
 	struct hmm_range *range;
@@ -796,25 +797,14 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	uint64_t *pfns;
 	int r = 0;
 
-	if (!mm) /* Happens during process shutdown */
-		return -ESRCH;
-
 	if (unlikely(!mirror)) {
 		DRM_DEBUG_DRIVER("Failed to get hmm_mirror\n");
-		r = -EFAULT;
-		goto out;
+		return -EFAULT;
 	}
 
-	vma = find_vma(mm, start);
-	if (unlikely(!vma || start < vma->vm_start)) {
-		r = -EFAULT;
-		goto out;
-	}
-	if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) &&
-		vma->vm_file)) {
-		r = -EPERM;
-		goto out;
-	}
+	mm = mirror->hmm->mmu_notifier.mm;
+	if (!mmget_not_zero(mm)) /* Happens during process shutdown */
+		return -ESRCH;
 
 	range = kzalloc(sizeof(*range), GFP_KERNEL);
 	if (unlikely(!range)) {
@@ -847,6 +837,17 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT);
 
 	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, start);
+	if (unlikely(!vma || start < vma->vm_start)) {
+		r = -EFAULT;
+		goto out_unlock;
+	}
+	if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) &&
+		vma->vm_file)) {
+		r = -EPERM;
+		goto out_unlock;
+	}
+
 	r = hmm_range_fault(range, 0);
 	up_read(&mm->mmap_sem);
 
@@ -865,15 +866,19 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	}
 
 	gtt->range = range;
+	mmput(mm);
 
 	return 0;
 
+out_unlock:
+	up_read(&mm->mmap_sem);
 out_free_pfns:
 	hmm_range_unregister(range);
 	kvfree(pfns);
 out_free_ranges:
 	kfree(range);
 out:
+	mmput(mm);
 	return r;
 }
 
-- 
2.23.0



* [PATCH hmm 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Alex Deucher, Christian König,
	David Zhou

From: Jason Gunthorpe <jgg@mellanox.com>

Remove the interval tree in the driver and rely on the tree maintained by
the mmu_notifier for delivering mmu_notifier invalidation callbacks.

For some reason amdgpu has a very complicated arrangement where it tries
to prevent duplicate entries in the interval_tree; this is not necessary,
as each amdgpu_bo can be its own stand-alone entry. interval_tree already
allows duplicates and overlaps in the tree.

Also, there is no need to remove entries upon a release callback, the
mmu_range API safely allows objects to remain registered beyond the
lifetime of the mm. The driver only has to stop touching the pages during
release.

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: David (ChunMing) Zhou <David1.Zhou@amd.com>
Cc: amd-gfx@lists.freedesktop.org
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |   2 +
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        | 341 ++++--------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h        |   4 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |  13 +-
 6 files changed, 84 insertions(+), 282 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index bd37df5dd6d048..60591a5d420021 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1006,6 +1006,8 @@ struct amdgpu_device {
 	struct mutex  lock_reset;
 	struct amdgpu_doorbell_index doorbell_index;
 
+	struct mutex			notifier_lock;
+
 	int asic_reset_res;
 	struct work_struct		xgmi_reset_work;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 6d021ecc8d598f..47700302a08b7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -481,8 +481,7 @@ static void remove_kgd_mem_from_kfd_bo_list(struct kgd_mem *mem,
  *
  * Returns 0 for success, negative errno for errors.
  */
-static int init_user_pages(struct kgd_mem *mem, struct mm_struct *mm,
-			   uint64_t user_addr)
+static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr)
 {
 	struct amdkfd_process_info *process_info = mem->process_info;
 	struct amdgpu_bo *bo = mem->bo;
@@ -1195,7 +1194,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
 	add_kgd_mem_to_kfd_bo_list(*mem, avm->process_info, user_addr);
 
 	if (user_addr) {
-		ret = init_user_pages(*mem, current->mm, user_addr);
+		ret = init_user_pages(*mem, user_addr);
 		if (ret)
 			goto allocate_init_user_pages_failed;
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 5a1939dbd4e3e6..38f97998aaddb2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2633,6 +2633,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	mutex_init(&adev->virt.vf_errors.lock);
 	hash_init(adev->mn_hash);
 	mutex_init(&adev->lock_reset);
+	mutex_init(&adev->notifier_lock);
 	mutex_init(&adev->virt.dpm_mutex);
 	mutex_init(&adev->psp.mutex);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 31d4deb5d29484..4ffd7b90f4d907 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -50,66 +50,6 @@
 #include "amdgpu.h"
 #include "amdgpu_amdkfd.h"
 
-/**
- * struct amdgpu_mn_node
- *
- * @it: interval node defining start-last of the affected address range
- * @bos: list of all BOs in the affected address range
- *
- * Manages all BOs which are affected of a certain range of address space.
- */
-struct amdgpu_mn_node {
-	struct interval_tree_node	it;
-	struct list_head		bos;
-};
-
-/**
- * amdgpu_mn_destroy - destroy the HMM mirror
- *
- * @work: previously sheduled work item
- *
- * Lazy destroys the notifier from a work item
- */
-static void amdgpu_mn_destroy(struct work_struct *work)
-{
-	struct amdgpu_mn *amn = container_of(work, struct amdgpu_mn, work);
-	struct amdgpu_device *adev = amn->adev;
-	struct amdgpu_mn_node *node, *next_node;
-	struct amdgpu_bo *bo, *next_bo;
-
-	mutex_lock(&adev->mn_lock);
-	down_write(&amn->lock);
-	hash_del(&amn->node);
-	rbtree_postorder_for_each_entry_safe(node, next_node,
-					     &amn->objects.rb_root, it.rb) {
-		list_for_each_entry_safe(bo, next_bo, &node->bos, mn_list) {
-			bo->mn = NULL;
-			list_del_init(&bo->mn_list);
-		}
-		kfree(node);
-	}
-	up_write(&amn->lock);
-	mutex_unlock(&adev->mn_lock);
-
-	hmm_mirror_unregister(&amn->mirror);
-	kfree(amn);
-}
-
-/**
- * amdgpu_hmm_mirror_release - callback to notify about mm destruction
- *
- * @mirror: the HMM mirror (mm) this callback is about
- *
- * Shedule a work item to lazy destroy HMM mirror.
- */
-static void amdgpu_hmm_mirror_release(struct hmm_mirror *mirror)
-{
-	struct amdgpu_mn *amn = container_of(mirror, struct amdgpu_mn, mirror);
-
-	INIT_WORK(&amn->work, amdgpu_mn_destroy);
-	schedule_work(&amn->work);
-}
-
 /**
  * amdgpu_mn_lock - take the write side lock for this notifier
  *
@@ -133,157 +73,86 @@ void amdgpu_mn_unlock(struct amdgpu_mn *mn)
 }
 
 /**
- * amdgpu_mn_read_lock - take the read side lock for this notifier
- *
- * @amn: our notifier
- */
-static int amdgpu_mn_read_lock(struct amdgpu_mn *amn, bool blockable)
-{
-	if (blockable)
-		down_read(&amn->lock);
-	else if (!down_read_trylock(&amn->lock))
-		return -EAGAIN;
-
-	return 0;
-}
-
-/**
- * amdgpu_mn_read_unlock - drop the read side lock for this notifier
- *
- * @amn: our notifier
- */
-static void amdgpu_mn_read_unlock(struct amdgpu_mn *amn)
-{
-	up_read(&amn->lock);
-}
-
-/**
- * amdgpu_mn_invalidate_node - unmap all BOs of a node
+ * amdgpu_mn_invalidate_gfx - callback to notify about mm change
  *
- * @node: the node with the BOs to unmap
- * @start: start of address range affected
- * @end: end of address range affected
+ * @mrn: the range (mm) is about to update
+ * @range: details on the invalidation
  *
  * Block for operations on BOs to finish and mark pages as accessed and
  * potentially dirty.
  */
-static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node,
-				      unsigned long start,
-				      unsigned long end)
+static bool amdgpu_mn_invalidate_gfx(struct mmu_range_notifier *mrn,
+				     const struct mmu_notifier_range *range)
 {
-	struct amdgpu_bo *bo;
+	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	long r;
 
-	list_for_each_entry(bo, &node->bos, mn_list) {
-
-		if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end))
-			continue;
-
-		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
-			true, false, MAX_SCHEDULE_TIMEOUT);
-		if (r <= 0)
-			DRM_ERROR("(%ld) failed to wait for user bo\n", r);
-	}
+	/* FIXME: Is this necessary? */
+	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
+					  range->end))
+		return true;
+
+	if (!mmu_notifier_range_blockable(range))
+		return false;
+
+	mutex_lock(&adev->notifier_lock);
+	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
+				      MAX_SCHEDULE_TIMEOUT);
+	mutex_unlock(&adev->notifier_lock);
+	if (r <= 0)
+		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
+	return true;
 }
 
+static const struct mmu_range_notifier_ops amdgpu_mn_gfx_ops = {
+	.invalidate = amdgpu_mn_invalidate_gfx,
+};
+
 /**
- * amdgpu_mn_sync_pagetables_gfx - callback to notify about mm change
+ * amdgpu_mn_invalidate_hsa - callback to notify about mm change
  *
- * @mirror: the hmm_mirror (mm) is about to update
- * @update: the update start, end address
+ * @mrn: the range (mm) is about to update
+ * @range: details on the invalidation
  *
- * Block for operations on BOs to finish and mark pages as accessed and
- * potentially dirty.
+ * We temporarily evict the BO attached to this range. This necessitates
+ * evicting all user-mode queues of the process.
  */
-static int
-amdgpu_mn_sync_pagetables_gfx(struct hmm_mirror *mirror,
-			      const struct mmu_notifier_range *update)
+static bool amdgpu_mn_invalidate_hsa(struct mmu_range_notifier *mrn,
+				     const struct mmu_notifier_range *range)
 {
-	struct amdgpu_mn *amn = container_of(mirror, struct amdgpu_mn, mirror);
-	unsigned long start = update->start;
-	unsigned long end = update->end;
-	bool blockable = mmu_notifier_range_blockable(update);
-	struct interval_tree_node *it;
-
-	/* notification is exclusive, but interval is inclusive */
-	end -= 1;
-
-	/* TODO we should be able to split locking for interval tree and
-	 * amdgpu_mn_invalidate_node
-	 */
-	if (amdgpu_mn_read_lock(amn, blockable))
-		return -EAGAIN;
-
-	it = interval_tree_iter_first(&amn->objects, start, end);
-	while (it) {
-		struct amdgpu_mn_node *node;
-
-		if (!blockable) {
-			amdgpu_mn_read_unlock(amn);
-			return -EAGAIN;
-		}
+	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 
-		node = container_of(it, struct amdgpu_mn_node, it);
-		it = interval_tree_iter_next(it, start, end);
+	/* FIXME: Is this necessary? */
+	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
+					  range->end))
+		return true;
 
-		amdgpu_mn_invalidate_node(node, start, end);
-	}
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-	amdgpu_mn_read_unlock(amn);
+	mutex_lock(&adev->notifier_lock);
+	amdgpu_amdkfd_evict_userptr(bo->kfd_bo, bo->notifier.mm);
+	mutex_unlock(&adev->notifier_lock);
 
-	return 0;
+	return true;
 }
 
-/**
- * amdgpu_mn_sync_pagetables_hsa - callback to notify about mm change
- *
- * @mirror: the hmm_mirror (mm) is about to update
- * @update: the update start, end address
- *
- * We temporarily evict all BOs between start and end. This
- * necessitates evicting all user-mode queues of the process. The BOs
- * are restorted in amdgpu_mn_invalidate_range_end_hsa.
- */
-static int
-amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror,
-			      const struct mmu_notifier_range *update)
+static const struct mmu_range_notifier_ops amdgpu_mn_hsa_ops = {
+	.invalidate = amdgpu_mn_invalidate_hsa,
+};
+
+static int amdgpu_mn_sync_pagetables(struct hmm_mirror *mirror,
+				     const struct mmu_notifier_range *update)
 {
 	struct amdgpu_mn *amn = container_of(mirror, struct amdgpu_mn, mirror);
-	unsigned long start = update->start;
-	unsigned long end = update->end;
-	bool blockable = mmu_notifier_range_blockable(update);
-	struct interval_tree_node *it;
 
-	/* notification is exclusive, but interval is inclusive */
-	end -= 1;
-
-	if (amdgpu_mn_read_lock(amn, blockable))
-		return -EAGAIN;
-
-	it = interval_tree_iter_first(&amn->objects, start, end);
-	while (it) {
-		struct amdgpu_mn_node *node;
-		struct amdgpu_bo *bo;
-
-		if (!blockable) {
-			amdgpu_mn_read_unlock(amn);
-			return -EAGAIN;
-		}
-
-		node = container_of(it, struct amdgpu_mn_node, it);
-		it = interval_tree_iter_next(it, start, end);
-
-		list_for_each_entry(bo, &node->bos, mn_list) {
-			struct kgd_mem *mem = bo->kfd_bo;
-
-			if (amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm,
-							 start, end))
-				amdgpu_amdkfd_evict_userptr(mem, amn->mm);
-		}
-	}
-
-	amdgpu_mn_read_unlock(amn);
+	if (!mmu_notifier_range_blockable(update))
+		return false;
 
+	down_read(&amn->lock);
+	up_read(&amn->lock);
 	return 0;
 }
 
@@ -295,12 +164,10 @@ amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror,
 
 static struct hmm_mirror_ops amdgpu_hmm_mirror_ops[] = {
 	[AMDGPU_MN_TYPE_GFX] = {
-		.sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables_gfx,
-		.release = amdgpu_hmm_mirror_release
+		.sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables,
 	},
 	[AMDGPU_MN_TYPE_HSA] = {
-		.sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables_hsa,
-		.release = amdgpu_hmm_mirror_release
+		.sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables,
 	},
 };
 
@@ -327,7 +194,8 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
 	}
 
 	hash_for_each_possible(adev->mn_hash, amn, node, key)
-		if (AMDGPU_MN_KEY(amn->mm, amn->type) == key)
+		if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm,
+				  amn->type) == key)
 			goto release_locks;
 
 	amn = kzalloc(sizeof(*amn), GFP_KERNEL);
@@ -337,10 +205,8 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
 	}
 
 	amn->adev = adev;
-	amn->mm = mm;
 	init_rwsem(&amn->lock);
 	amn->type = type;
-	amn->objects = RB_ROOT_CACHED;
 
 	amn->mirror.ops = &amdgpu_hmm_mirror_ops[type];
 	r = hmm_mirror_register(&amn->mirror, mm);
@@ -369,100 +235,33 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
  * @bo: amdgpu buffer object
  * @addr: userptr addr we should monitor
  *
- * Registers an HMM mirror for the given BO at the specified address.
+ * Registers a mmu_notifier for the given BO at the specified address.
  * Returns 0 on success, -ERRNO if anything goes wrong.
  */
 int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
 {
-	unsigned long end = addr + amdgpu_bo_size(bo) - 1;
-	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	enum amdgpu_mn_type type =
-		bo->kfd_bo ? AMDGPU_MN_TYPE_HSA : AMDGPU_MN_TYPE_GFX;
-	struct amdgpu_mn *amn;
-	struct amdgpu_mn_node *node = NULL, *new_node;
-	struct list_head bos;
-	struct interval_tree_node *it;
-
-	amn = amdgpu_mn_get(adev, type);
-	if (IS_ERR(amn))
-		return PTR_ERR(amn);
-
-	new_node = kmalloc(sizeof(*new_node), GFP_KERNEL);
-	if (!new_node)
-		return -ENOMEM;
-
-	INIT_LIST_HEAD(&bos);
-
-	down_write(&amn->lock);
-
-	while ((it = interval_tree_iter_first(&amn->objects, addr, end))) {
-		kfree(node);
-		node = container_of(it, struct amdgpu_mn_node, it);
-		interval_tree_remove(&node->it, &amn->objects);
-		addr = min(it->start, addr);
-		end = max(it->last, end);
-		list_splice(&node->bos, &bos);
-	}
-
-	if (!node)
-		node = new_node;
+	if (bo->kfd_bo)
+		bo->notifier.ops = &amdgpu_mn_hsa_ops;
 	else
-		kfree(new_node);
-
-	bo->mn = amn;
-
-	node->it.start = addr;
-	node->it.last = end;
-	INIT_LIST_HEAD(&node->bos);
-	list_splice(&bos, &node->bos);
-	list_add(&bo->mn_list, &node->bos);
+		bo->notifier.ops = &amdgpu_mn_gfx_ops;
 
-	interval_tree_insert(&node->it, &amn->objects);
-
-	up_write(&amn->lock);
-
-	return 0;
+	return mmu_range_notifier_insert(&bo->notifier, addr,
+					 amdgpu_bo_size(bo), current->mm);
 }
 
 /**
- * amdgpu_mn_unregister - unregister a BO for HMM mirror updates
+ * amdgpu_mn_unregister - unregister a BO for notifier updates
  *
  * @bo: amdgpu buffer object
  *
- * Remove any registration of HMM mirror updates from the buffer object.
+ * Remove any registration of mmu notifier updates from the buffer object.
  */
 void amdgpu_mn_unregister(struct amdgpu_bo *bo)
 {
-	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct amdgpu_mn *amn;
-	struct list_head *head;
-
-	mutex_lock(&adev->mn_lock);
-
-	amn = bo->mn;
-	if (amn == NULL) {
-		mutex_unlock(&adev->mn_lock);
+	if (!bo->notifier.mm)
 		return;
-	}
-
-	down_write(&amn->lock);
-
-	/* save the next list entry for later */
-	head = bo->mn_list.next;
-
-	bo->mn = NULL;
-	list_del_init(&bo->mn_list);
-
-	if (list_empty(head)) {
-		struct amdgpu_mn_node *node;
-
-		node = container_of(head, struct amdgpu_mn_node, bos);
-		interval_tree_remove(&node->it, &amn->objects);
-		kfree(node);
-	}
-
-	up_write(&amn->lock);
-	mutex_unlock(&adev->mn_lock);
+	mmu_range_notifier_remove(&bo->notifier);
+	bo->notifier.mm = NULL;
 }
 
 /* flags used by HMM internal, not related to CPU/GPU PTE flags */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
index b8ed68943625c2..d73ab2947b22b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
@@ -39,12 +39,10 @@ enum amdgpu_mn_type {
  * struct amdgpu_mn
  *
  * @adev: amdgpu device pointer
- * @mm: process address space
  * @type: type of MMU notifier
  * @work: destruction work item
  * @node: hash table node to find structure by adev and mn
  * @lock: rw semaphore protecting the notifier nodes
- * @objects: interval tree containing amdgpu_mn_nodes
  * @mirror: HMM mirror function support
  *
  * Data for each amdgpu device and process address space.
@@ -52,7 +50,6 @@ enum amdgpu_mn_type {
 struct amdgpu_mn {
 	/* constant after initialisation */
 	struct amdgpu_device	*adev;
-	struct mm_struct	*mm;
 	enum amdgpu_mn_type	type;
 
 	/* only used on destruction */
@@ -63,7 +60,6 @@ struct amdgpu_mn {
 
 	/* objects protected by lock */
 	struct rw_semaphore	lock;
-	struct rb_root_cached	objects;
 
 #ifdef CONFIG_HMM_MIRROR
 	/* HMM mirror */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 658f4c9779b704..4b44ab850f94c2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -30,6 +30,9 @@
 
 #include <drm/amdgpu_drm.h>
 #include "amdgpu.h"
+#ifdef CONFIG_MMU_NOTIFIER
+#include <linux/mmu_notifier.h>
+#endif
 
 #define AMDGPU_BO_INVALID_OFFSET	LONG_MAX
 #define AMDGPU_BO_MAX_PLACEMENTS	3
@@ -100,10 +103,12 @@ struct amdgpu_bo {
 	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	struct amdgpu_mn		*mn;
 
-	union {
-		struct list_head	mn_list;
-		struct list_head	shadow_list;
-	};
+
+#ifdef CONFIG_MMU_NOTIFIER
+	struct mmu_range_notifier	notifier;
+#endif
+
+	struct list_head		shadow_list;
 
 	struct kgd_mem                  *kfd_bo;
 };
-- 
2.23.0



* [PATCH hmm 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Alex Deucher, Christian König,
	David Zhou

From: Jason Gunthorpe <jgg@mellanox.com>

Convert the collision-retry lock around hmm_range_fault to use the one
now provided by the mmu_range notifier.

Although this driver does not seem to use hmm's collision-retry lock
correctly, it can still be converted over to the mmu_range_notifier API
instead of hmm_mirror without too much trouble.

This also deletes another place where a driver associates additional
data (struct amdgpu_mn) with an mm_struct.

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: David (ChunMing) Zhou <David1.Zhou@amd.com>
Cc: amd-gfx@lists.freedesktop.org
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   4 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        | 130 ------------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h        |  49 -------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |  76 +++++-----
 5 files changed, 50 insertions(+), 223 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 47700302a08b7f..1bcedb9b477dce 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1738,6 +1738,10 @@ static int update_invalid_user_pages(struct amdkfd_process_info *process_info,
 			return ret;
 		}
 
+		/*
+		 * FIXME: Cannot ignore the return code, must hold
+		 * notifier_lock
+		 */
 		amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm);
 
 		/* Mark the BO as valid unless it was invalidated
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 2e53feed40e230..76771f5f0b60ab 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -607,8 +607,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 		e->tv.num_shared = 2;
 
 	amdgpu_bo_list_get_list(p->bo_list, &p->validated);
-	if (p->bo_list->first_userptr != p->bo_list->num_entries)
-		p->mn = amdgpu_mn_get(p->adev, AMDGPU_MN_TYPE_GFX);
 
 	INIT_LIST_HEAD(&duplicates);
 	amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
@@ -1291,11 +1289,11 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	if (r)
 		goto error_unlock;
 
-	/* No memory allocation is allowed while holding the mn lock.
-	 * p->mn is hold until amdgpu_cs_submit is finished and fence is added
-	 * to BOs.
+	/* No memory allocation is allowed while holding the notifier lock.
+	 * The lock is held until amdgpu_cs_submit is finished and fence is
+	 * added to BOs.
 	 */
-	amdgpu_mn_lock(p->mn);
+	mutex_lock(&p->adev->notifier_lock);
 
 	/* If userptr are invalidated after amdgpu_cs_parser_bos(), return
 	 * -EAGAIN, drmIoctl in libdrm will restart the amdgpu_cs_ioctl.
@@ -1338,13 +1336,13 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
 
 	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
-	amdgpu_mn_unlock(p->mn);
+	mutex_unlock(&p->adev->notifier_lock);
 
 	return 0;
 
 error_abort:
 	drm_sched_job_cleanup(&job->base);
-	amdgpu_mn_unlock(p->mn);
+	mutex_unlock(&p->adev->notifier_lock);
 
 error_unlock:
 	amdgpu_job_free(job);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 4ffd7b90f4d907..cf1efe7ffc3a83 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -50,28 +50,6 @@
 #include "amdgpu.h"
 #include "amdgpu_amdkfd.h"
 
-/**
- * amdgpu_mn_lock - take the write side lock for this notifier
- *
- * @mn: our notifier
- */
-void amdgpu_mn_lock(struct amdgpu_mn *mn)
-{
-	if (mn)
-		down_write(&mn->lock);
-}
-
-/**
- * amdgpu_mn_unlock - drop the write side lock for this notifier
- *
- * @mn: our notifier
- */
-void amdgpu_mn_unlock(struct amdgpu_mn *mn)
-{
-	if (mn)
-		up_write(&mn->lock);
-}
-
 /**
  * amdgpu_mn_invalidate_gfx - callback to notify about mm change
  *
@@ -143,92 +121,6 @@ static const struct mmu_range_notifier_ops amdgpu_mn_hsa_ops = {
 	.invalidate = amdgpu_mn_invalidate_hsa,
 };
 
-static int amdgpu_mn_sync_pagetables(struct hmm_mirror *mirror,
-				     const struct mmu_notifier_range *update)
-{
-	struct amdgpu_mn *amn = container_of(mirror, struct amdgpu_mn, mirror);
-
-	if (!mmu_notifier_range_blockable(update))
-		return false;
-
-	down_read(&amn->lock);
-	up_read(&amn->lock);
-	return 0;
-}
-
-/* Low bits of any reasonable mm pointer will be unused due to struct
- * alignment. Use these bits to make a unique key from the mm pointer
- * and notifier type.
- */
-#define AMDGPU_MN_KEY(mm, type) ((unsigned long)(mm) + (type))
-
-static struct hmm_mirror_ops amdgpu_hmm_mirror_ops[] = {
-	[AMDGPU_MN_TYPE_GFX] = {
-		.sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables,
-	},
-	[AMDGPU_MN_TYPE_HSA] = {
-		.sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables,
-	},
-};
-
-/**
- * amdgpu_mn_get - create HMM mirror context
- *
- * @adev: amdgpu device pointer
- * @type: type of MMU notifier context
- *
- * Creates a HMM mirror context for current->mm.
- */
-struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
-				enum amdgpu_mn_type type)
-{
-	struct mm_struct *mm = current->mm;
-	struct amdgpu_mn *amn;
-	unsigned long key = AMDGPU_MN_KEY(mm, type);
-	int r;
-
-	mutex_lock(&adev->mn_lock);
-	if (down_write_killable(&mm->mmap_sem)) {
-		mutex_unlock(&adev->mn_lock);
-		return ERR_PTR(-EINTR);
-	}
-
-	hash_for_each_possible(adev->mn_hash, amn, node, key)
-		if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm,
-				  amn->type) == key)
-			goto release_locks;
-
-	amn = kzalloc(sizeof(*amn), GFP_KERNEL);
-	if (!amn) {
-		amn = ERR_PTR(-ENOMEM);
-		goto release_locks;
-	}
-
-	amn->adev = adev;
-	init_rwsem(&amn->lock);
-	amn->type = type;
-
-	amn->mirror.ops = &amdgpu_hmm_mirror_ops[type];
-	r = hmm_mirror_register(&amn->mirror, mm);
-	if (r)
-		goto free_amn;
-
-	hash_add(adev->mn_hash, &amn->node, AMDGPU_MN_KEY(mm, type));
-
-release_locks:
-	up_write(&mm->mmap_sem);
-	mutex_unlock(&adev->mn_lock);
-
-	return amn;
-
-free_amn:
-	up_write(&mm->mmap_sem);
-	mutex_unlock(&adev->mn_lock);
-	kfree(amn);
-
-	return ERR_PTR(r);
-}
-
 /**
  * amdgpu_mn_register - register a BO for notifier updates
  *
@@ -263,25 +155,3 @@ void amdgpu_mn_unregister(struct amdgpu_bo *bo)
 	mmu_range_notifier_remove(&bo->notifier);
 	bo->notifier.mm = NULL;
 }
-
-/* flags used by HMM internal, not related to CPU/GPU PTE flags */
-static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] = {
-		(1 << 0), /* HMM_PFN_VALID */
-		(1 << 1), /* HMM_PFN_WRITE */
-		0 /* HMM_PFN_DEVICE_PRIVATE */
-};
-
-static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] = {
-		0xfffffffffffffffeUL, /* HMM_PFN_ERROR */
-		0, /* HMM_PFN_NONE */
-		0xfffffffffffffffcUL /* HMM_PFN_SPECIAL */
-};
-
-void amdgpu_hmm_init_range(struct hmm_range *range)
-{
-	if (range) {
-		range->flags = hmm_range_flags;
-		range->values = hmm_range_values;
-		range->pfn_shift = PAGE_SHIFT;
-	}
-}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
index d73ab2947b22b2..a292238f75ebae 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
@@ -30,59 +30,10 @@
 #include <linux/workqueue.h>
 #include <linux/interval_tree.h>
 
-enum amdgpu_mn_type {
-	AMDGPU_MN_TYPE_GFX,
-	AMDGPU_MN_TYPE_HSA,
-};
-
-/**
- * struct amdgpu_mn
- *
- * @adev: amdgpu device pointer
- * @type: type of MMU notifier
- * @work: destruction work item
- * @node: hash table node to find structure by adev and mn
- * @lock: rw semaphore protecting the notifier nodes
- * @mirror: HMM mirror function support
- *
- * Data for each amdgpu device and process address space.
- */
-struct amdgpu_mn {
-	/* constant after initialisation */
-	struct amdgpu_device	*adev;
-	enum amdgpu_mn_type	type;
-
-	/* only used on destruction */
-	struct work_struct	work;
-
-	/* protected by adev->mn_lock */
-	struct hlist_node	node;
-
-	/* objects protected by lock */
-	struct rw_semaphore	lock;
-
-#ifdef CONFIG_HMM_MIRROR
-	/* HMM mirror */
-	struct hmm_mirror	mirror;
-#endif
-};
-
 #if defined(CONFIG_HMM_MIRROR)
-void amdgpu_mn_lock(struct amdgpu_mn *mn);
-void amdgpu_mn_unlock(struct amdgpu_mn *mn);
-struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
-				enum amdgpu_mn_type type);
 int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr);
 void amdgpu_mn_unregister(struct amdgpu_bo *bo);
-void amdgpu_hmm_init_range(struct hmm_range *range);
 #else
-static inline void amdgpu_mn_lock(struct amdgpu_mn *mn) {}
-static inline void amdgpu_mn_unlock(struct amdgpu_mn *mn) {}
-static inline struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
-					      enum amdgpu_mn_type type)
-{
-	return NULL;
-}
 static inline int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
 {
 	DRM_WARN_ONCE("HMM_MIRROR kernel config option is not enabled, "
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c0e41f1f0c2365..65d9824b54f2a9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -773,6 +773,20 @@ struct amdgpu_ttm_tt {
 #endif
 };
 
+#ifdef CONFIG_DRM_AMDGPU_USERPTR
+/* flags used internally by HMM, not related to CPU/GPU PTE flags */
+static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] = {
+	(1 << 0), /* HMM_PFN_VALID */
+	(1 << 1), /* HMM_PFN_WRITE */
+	0 /* HMM_PFN_DEVICE_PRIVATE */
+};
+
+static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] = {
+	0xfffffffffffffffeUL, /* HMM_PFN_ERROR */
+	0, /* HMM_PFN_NONE */
+	0xfffffffffffffffcUL /* HMM_PFN_SPECIAL */
+};
+
 /**
  * amdgpu_ttm_tt_get_user_pages - get device accessible pages that back user
  * memory and start HMM tracking CPU page table update
@@ -780,29 +794,27 @@ struct amdgpu_ttm_tt {
  * Calling function must call amdgpu_ttm_tt_userptr_range_done() once and only
  * once afterwards to stop HMM tracking
  */
-#if IS_ENABLED(CONFIG_DRM_AMDGPU_USERPTR)
-
-#define MAX_RETRY_HMM_RANGE_FAULT	16
-
 int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 {
-	struct hmm_mirror *mirror = bo->mn ? &bo->mn->mirror : NULL;
 	struct ttm_tt *ttm = bo->tbo.ttm;
 	struct amdgpu_ttm_tt *gtt = (void *)ttm;
 	struct mm_struct *mm;
+	struct hmm_range *range;
 	unsigned long start = gtt->userptr;
 	struct vm_area_struct *vma;
-	struct hmm_range *range;
 	unsigned long i;
-	uint64_t *pfns;
 	int r = 0;
 
-	if (unlikely(!mirror)) {
-		DRM_DEBUG_DRIVER("Failed to get hmm_mirror\n");
+	mm = bo->notifier.mm;
+	if (unlikely(!mm)) {
+		DRM_DEBUG_DRIVER("BO is not registered?\n");
 		return -EFAULT;
 	}
 
-	mm = mirror->hmm->mmu_notifier.mm;
+	/* Another get_user_pages is running at the same time?? */
+	if (WARN_ON(gtt->range))
+		return -EFAULT;
+
 	if (!mmget_not_zero(mm)) /* Happens during process shutdown */
 		return -ESRCH;
 
@@ -811,30 +823,24 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 		r = -ENOMEM;
 		goto out;
 	}
+	range->notifier = &bo->notifier;
+	range->flags = hmm_range_flags;
+	range->values = hmm_range_values;
+	range->pfn_shift = PAGE_SHIFT;
+	range->start = bo->notifier.interval_tree.start;
+	range->end = bo->notifier.interval_tree.last + 1;
+	range->default_flags = hmm_range_flags[HMM_PFN_VALID];
+	if (!amdgpu_ttm_tt_is_readonly(ttm))
+		range->default_flags |= range->flags[HMM_PFN_WRITE];
 
-	pfns = kvmalloc_array(ttm->num_pages, sizeof(*pfns), GFP_KERNEL);
-	if (unlikely(!pfns)) {
+	range->pfns = kvmalloc_array(ttm->num_pages, sizeof(*range->pfns),
+				     GFP_KERNEL);
+	if (unlikely(!range->pfns)) {
 		r = -ENOMEM;
 		goto out_free_ranges;
 	}
 
-	amdgpu_hmm_init_range(range);
-	range->default_flags = range->flags[HMM_PFN_VALID];
-	range->default_flags |= amdgpu_ttm_tt_is_readonly(ttm) ?
-				0 : range->flags[HMM_PFN_WRITE];
-	range->pfn_flags_mask = 0;
-	range->pfns = pfns;
-	range->start = start;
-	range->end = start + ttm->num_pages * PAGE_SIZE;
-
-	hmm_range_register(range, mirror);
-
-	/*
-	 * Just wait for range to be valid, safe to ignore return value as we
-	 * will use the return value of hmm_range_fault() below under the
-	 * mmap_sem to ascertain the validity of the range.
-	 */
-	hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT);
+	range->notifier_seq = mmu_range_read_begin(&bo->notifier);
 
 	down_read(&mm->mmap_sem);
 	vma = find_vma(mm, start);
@@ -855,10 +861,10 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 		goto out_free_pfns;
 
 	for (i = 0; i < ttm->num_pages; i++) {
-		pages[i] = hmm_device_entry_to_page(range, pfns[i]);
+		pages[i] = hmm_device_entry_to_page(range, range->pfns[i]);
 		if (unlikely(!pages[i])) {
 			pr_err("Page fault failed for pfn[%lu] = 0x%llx\n",
-			       i, pfns[i]);
+			       i, range->pfns[i]);
 			r = -ENOMEM;
 
 			goto out_free_pfns;
@@ -873,8 +879,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 out_unlock:
 	up_read(&mm->mmap_sem);
 out_free_pfns:
-	hmm_range_unregister(range);
-	kvfree(pfns);
+	kvfree(range->pfns);
 out_free_ranges:
 	kfree(range);
 out:
@@ -903,9 +908,8 @@ bool amdgpu_ttm_tt_get_user_pages_done(struct ttm_tt *ttm)
 		"No user pages to check\n");
 
 	if (gtt->range) {
-		r = hmm_range_valid(gtt->range);
-		hmm_range_unregister(gtt->range);
-
+		r = mmu_range_read_retry(gtt->range->notifier,
+					 gtt->range->notifier_seq);
 		kvfree(gtt->range->pfns);
 		kfree(gtt->range);
 		gtt->range = NULL;
-- 
2.23.0
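
[Editorial note: the amdgpu conversion above splits the sequence check across two
calls — amdgpu_ttm_tt_get_user_pages() records notifier_seq via
mmu_range_read_begin(), and amdgpu_ttm_tt_get_user_pages_done() later checks it
with mmu_range_read_retry(). A minimal userspace sketch of that two-phase
pattern follows; the range_read_begin()/range_read_retry() helpers stand in for
the kernel primitives introduced in this series, and the struct names are
illustrative, not the real amdgpu types.]

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy stand-in for struct mmu_range_notifier: the sequence changes
 * whenever an invalidation runs against the tracked range. */
struct notifier {
	atomic_ulong invalidate_seq;
};

/* Analogue of mmu_range_read_begin(): sample the current sequence. */
static unsigned long range_read_begin(struct notifier *n)
{
	return atomic_load(&n->invalidate_seq);
}

/* Analogue of mmu_range_read_retry(): true if an invalidation ran
 * since the matching range_read_begin(). */
static int range_read_retry(struct notifier *n, unsigned long seq)
{
	return atomic_load(&n->invalidate_seq) != seq;
}

/* Illustrative stand-in for the gtt->range state kept between calls. */
struct gtt_range {
	struct notifier *notifier;
	unsigned long notifier_seq;
};

/* Phase 1, analogous to amdgpu_ttm_tt_get_user_pages(): snapshot the
 * sequence before faulting/collecting the pages. */
static void get_user_pages(struct gtt_range *r, struct notifier *n)
{
	r->notifier = n;
	r->notifier_seq = range_read_begin(n);
	/* ... hmm_range_fault() would fill the pfns array here ... */
}

/* Phase 2, analogous to amdgpu_ttm_tt_get_user_pages_done(): report
 * whether the snapshot is still valid (nonzero) or must be redone. */
static int get_user_pages_done(struct gtt_range *r)
{
	return !range_read_retry(r->notifier, r->notifier_seq);
}
```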


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (13 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 14/15] drm/amdgpu: Use mmu_range_notifier " Jason Gunthorpe
@ 2019-10-15 18:12 ` Jason Gunthorpe
  2019-10-21 18:38   ` Jerome Glisse
  2019-10-16  8:58 ` [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Christian König
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-15 18:12 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

From: Jason Gunthorpe <jgg@mellanox.com>

The only two users of this are now converted to use mmu_range_notifier,
so delete all the code and update hmm.rst.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 Documentation/vm/hmm.rst | 105 ++++-----------
 include/linux/hmm.h      | 183 +------------------------
 mm/Kconfig               |   1 -
 mm/hmm.c                 | 284 +--------------------------------------
 4 files changed, 33 insertions(+), 540 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 0a5960beccf76d..a247643035c4e2 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -147,49 +147,16 @@ Address space mirroring implementation and API
 Address space mirroring's main objective is to allow duplication of a range of
 CPU page table into a device page table; HMM helps keep both synchronized. A
 device driver that wants to mirror a process address space must start with the
-registration of an hmm_mirror struct::
-
- int hmm_mirror_register(struct hmm_mirror *mirror,
-                         struct mm_struct *mm);
-
-The mirror struct has a set of callbacks that are used
-to propagate CPU page tables::
-
- struct hmm_mirror_ops {
-     /* release() - release hmm_mirror
-      *
-      * @mirror: pointer to struct hmm_mirror
-      *
-      * This is called when the mm_struct is being released.  The callback
-      * must ensure that all access to any pages obtained from this mirror
-      * is halted before the callback returns. All future access should
-      * fault.
-      */
-     void (*release)(struct hmm_mirror *mirror);
-
-     /* sync_cpu_device_pagetables() - synchronize page tables
-      *
-      * @mirror: pointer to struct hmm_mirror
-      * @update: update information (see struct mmu_notifier_range)
-      * Return: -EAGAIN if update.blockable false and callback need to
-      *         block, 0 otherwise.
-      *
-      * This callback ultimately originates from mmu_notifiers when the CPU
-      * page table is updated. The device driver must update its page table
-      * in response to this callback. The update argument tells what action
-      * to perform.
-      *
-      * The device driver must not return from this callback until the device
-      * page tables are completely updated (TLBs flushed, etc); this is a
-      * synchronous call.
-      */
-     int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
-                                       const struct hmm_update *update);
- };
-
-The device driver must perform the update action to the range (mark range
-read only, or fully unmap, etc.). The device must complete the update before
-the driver callback returns.
+registration of a mmu_range_notifier::
+
+ mrn->ops = &driver_ops;
+ int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
+			      unsigned long start, unsigned long length,
+			      struct mm_struct *mm);
+
+During the driver_ops->invalidate() callback the device driver must perform
+the update action to the range (mark range read only, or fully unmap,
+etc.). The device must complete the update before the driver callback returns.
 
 When the device driver wants to populate a range of virtual addresses, it can
 use::
@@ -216,70 +183,46 @@ The usage pattern is::
       struct hmm_range range;
       ...
 
+      range.notifier = &mrn;
       range.start = ...;
       range.end = ...;
       range.pfns = ...;
       range.flags = ...;
       range.values = ...;
       range.pfn_shift = ...;
-      hmm_range_register(&range, mirror);
 
-      /*
-       * Just wait for range to be valid, safe to ignore return value as we
-       * will use the return value of hmm_range_fault() below under the
-       * mmap_sem to ascertain the validity of the range.
-       */
-      hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
+      if (!mmget_not_zero(mrn->notifier.mm))
+          return -EFAULT;
 
  again:
+      range.notifier_seq = mmu_range_read_begin(&mrn);
       down_read(&mm->mmap_sem);
       ret = hmm_range_fault(&range, HMM_RANGE_SNAPSHOT);
       if (ret) {
           up_read(&mm->mmap_sem);
-          if (ret == -EBUSY) {
-            /*
-             * No need to check hmm_range_wait_until_valid() return value
-             * on retry we will get proper error with hmm_range_fault()
-             */
-            hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
-            goto again;
-          }
-          hmm_range_unregister(&range);
+          if (ret == -EBUSY)
+                 goto again;
           return ret;
       }
+      up_read(&mm->mmap_sem);
+
       take_lock(driver->update);
-      if (!hmm_range_valid(&range)) {
+      if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
           release_lock(driver->update);
-          up_read(&mm->mmap_sem);
           goto again;
       }
 
-      // Use pfns array content to update device page table
+      /* Use pfns array content to update device page table,
+       * under the update lock */
 
-      hmm_range_unregister(&range);
       release_lock(driver->update);
-      up_read(&mm->mmap_sem);
       return 0;
  }
 
 The driver->update lock is the same lock that the driver takes inside its
-sync_cpu_device_pagetables() callback. That lock must be held before calling
-hmm_range_valid() to avoid any race with a concurrent CPU page table update.
-
-HMM implements all this on top of the mmu_notifier API because we wanted a
-simpler API and also to be able to perform optimizations latter on like doing
-concurrent device updates in multi-devices scenario.
-
-HMM also serves as an impedance mismatch between how CPU page table updates
-are done (by CPU write to the page table and TLB flushes) and how devices
-update their own page table. Device updates are a multi-step process. First,
-appropriate commands are written to a buffer, then this buffer is scheduled for
-execution on the device. It is only once the device has executed commands in
-the buffer that the update is done. Creating and scheduling the update command
-buffer can happen concurrently for multiple devices. Waiting for each device to
-report commands as executed is serialized (there is no point in doing this
-concurrently).
-
+invalidate() callback. That lock must be held before calling
+mmu_range_read_retry() to avoid any race with a concurrent CPU page table
+update.
 
 Leverage default_flags and pfn_flags_mask
 =========================================
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 2666eb08a40615..b4af5173523232 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -68,29 +68,6 @@
 #include <linux/completion.h>
 #include <linux/mmu_notifier.h>
 
-
-/*
- * struct hmm - HMM per mm struct
- *
- * @mm: mm struct this HMM struct is bound to
- * @lock: lock protecting ranges list
- * @ranges: list of range being snapshotted
- * @mirrors: list of mirrors for this mm
- * @mmu_notifier: mmu notifier to track updates to CPU page table
- * @mirrors_sem: read/write semaphore protecting the mirrors list
- * @wq: wait queue for user waiting on a range invalidation
- * @notifiers: count of active mmu notifiers
- */
-struct hmm {
-	struct mmu_notifier	mmu_notifier;
-	spinlock_t		ranges_lock;
-	struct list_head	ranges;
-	struct list_head	mirrors;
-	struct rw_semaphore	mirrors_sem;
-	wait_queue_head_t	wq;
-	long			notifiers;
-};
-
 /*
  * hmm_pfn_flag_e - HMM flag enums
  *
@@ -143,9 +120,8 @@ enum hmm_pfn_value_e {
 /*
  * struct hmm_range - track invalidation lock on virtual address range
  *
- * @notifier: an optional mmu_range_notifier
- * @notifier_seq: when notifier is used this is the result of
- *                mmu_range_read_begin()
+ * @notifier: a mmu_range_notifier that includes the start/end
+ * @notifier_seq: result of mmu_range_read_begin()
  * @hmm: the core HMM structure this range is active against
  * @vma: the vm area struct for the range
  * @list: all range lock are on a list
@@ -162,8 +138,6 @@ enum hmm_pfn_value_e {
 struct hmm_range {
 	struct mmu_range_notifier *notifier;
 	unsigned long		notifier_seq;
-	struct hmm		*hmm;
-	struct list_head	list;
 	unsigned long		start;
 	unsigned long		end;
 	uint64_t		*pfns;
@@ -172,32 +146,8 @@ struct hmm_range {
 	uint64_t		default_flags;
 	uint64_t		pfn_flags_mask;
 	uint8_t			pfn_shift;
-	bool			valid;
 };
 
-/*
- * hmm_range_wait_until_valid() - wait for range to be valid
- * @range: range affected by invalidation to wait on
- * @timeout: time out for wait in ms (ie abort wait after that period of time)
- * Return: true if the range is valid, false otherwise.
- */
-static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
-					      unsigned long timeout)
-{
-	return wait_event_timeout(range->hmm->wq, range->valid,
-				  msecs_to_jiffies(timeout)) != 0;
-}
-
-/*
- * hmm_range_valid() - test if a range is valid or not
- * @range: range
- * Return: true if the range is valid, false otherwise.
- */
-static inline bool hmm_range_valid(struct hmm_range *range)
-{
-	return range->valid;
-}
-
 /*
  * hmm_device_entry_to_page() - return struct page pointed to by a device entry
  * @range: range use to decode device entry value
@@ -267,111 +217,6 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
 		range->flags[HMM_PFN_VALID];
 }
 
-/*
- * Mirroring: how to synchronize device page table with CPU page table.
- *
- * A device driver that is participating in HMM mirroring must always
- * synchronize with CPU page table updates. For this, device drivers can either
- * directly use mmu_notifier APIs or they can use the hmm_mirror API. Device
- * drivers can decide to register one mirror per device per process, or just
- * one mirror per process for a group of devices. The pattern is:
- *
- *      int device_bind_address_space(..., struct mm_struct *mm, ...)
- *      {
- *          struct device_address_space *das;
- *
- *          // Device driver specific initialization, and allocation of das
- *          // which contains an hmm_mirror struct as one of its fields.
- *          ...
- *
- *          ret = hmm_mirror_register(&das->mirror, mm, &device_mirror_ops);
- *          if (ret) {
- *              // Cleanup on error
- *              return ret;
- *          }
- *
- *          // Other device driver specific initialization
- *          ...
- *      }
- *
- * Once an hmm_mirror is registered for an address space, the device driver
- * will get callbacks through sync_cpu_device_pagetables() operation (see
- * hmm_mirror_ops struct).
- *
- * Device driver must not free the struct containing the hmm_mirror struct
- * before calling hmm_mirror_unregister(). The expected usage is to do that when
- * the device driver is unbinding from an address space.
- *
- *
- *      void device_unbind_address_space(struct device_address_space *das)
- *      {
- *          // Device driver specific cleanup
- *          ...
- *
- *          hmm_mirror_unregister(&das->mirror);
- *
- *          // Other device driver specific cleanup, and now das can be freed
- *          ...
- *      }
- */
-
-struct hmm_mirror;
-
-/*
- * struct hmm_mirror_ops - HMM mirror device operations callback
- *
- * @update: callback to update range on a device
- */
-struct hmm_mirror_ops {
-	/* release() - release hmm_mirror
-	 *
-	 * @mirror: pointer to struct hmm_mirror
-	 *
-	 * This is called when the mm_struct is being released.  The callback
-	 * must ensure that all access to any pages obtained from this mirror
-	 * is halted before the callback returns. All future access should
-	 * fault.
-	 */
-	void (*release)(struct hmm_mirror *mirror);
-
-	/* sync_cpu_device_pagetables() - synchronize page tables
-	 *
-	 * @mirror: pointer to struct hmm_mirror
-	 * @update: update information (see struct mmu_notifier_range)
-	 * Return: -EAGAIN if mmu_notifier_range_blockable(update) is false
-	 * and callback needs to block, 0 otherwise.
-	 *
-	 * This callback ultimately originates from mmu_notifiers when the CPU
-	 * page table is updated. The device driver must update its page table
-	 * in response to this callback. The update argument tells what action
-	 * to perform.
-	 *
-	 * The device driver must not return from this callback until the device
-	 * page tables are completely updated (TLBs flushed, etc); this is a
-	 * synchronous call.
-	 */
-	int (*sync_cpu_device_pagetables)(
-		struct hmm_mirror *mirror,
-		const struct mmu_notifier_range *update);
-};
-
-/*
- * struct hmm_mirror - mirror struct for a device driver
- *
- * @hmm: pointer to struct hmm (which is unique per mm_struct)
- * @ops: device driver callback for HMM mirror operations
- * @list: for list of mirrors of a given mm
- *
- * Each address space (mm_struct) being mirrored by a device must register one
- * instance of an hmm_mirror struct with HMM. HMM will track the list of all
- * mirrors for each mm_struct.
- */
-struct hmm_mirror {
-	struct hmm			*hmm;
-	const struct hmm_mirror_ops	*ops;
-	struct list_head		list;
-};
-
 /*
  * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
  */
@@ -381,15 +226,9 @@ struct hmm_mirror {
 #define HMM_FAULT_SNAPSHOT		(1 << 1)
 
 #ifdef CONFIG_HMM_MIRROR
-int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);
-void hmm_mirror_unregister(struct hmm_mirror *mirror);
-
 /*
  * Please see Documentation/vm/hmm.rst for how to use the range API.
  */
-int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror);
-void hmm_range_unregister(struct hmm_range *range);
-
 long hmm_range_fault(struct hmm_range *range, unsigned int flags);
 
 long hmm_range_dma_map(struct hmm_range *range,
@@ -401,24 +240,6 @@ long hmm_range_dma_unmap(struct hmm_range *range,
 			 dma_addr_t *daddrs,
 			 bool dirty);
 #else
-int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm)
-{
-	return -EOPNOTSUPP;
-}
-
-void hmm_mirror_unregister(struct hmm_mirror *mirror)
-{
-}
-
-int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror)
-{
-	return -EOPNOTSUPP;
-}
-
-void hmm_range_unregister(struct hmm_range *range)
-{
-}
-
 static inline long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 {
 	return -EOPNOTSUPP;
diff --git a/mm/Kconfig b/mm/Kconfig
index d0b5046d9aeffd..e38ff1d5968dbf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -675,7 +675,6 @@ config DEV_PAGEMAP_OPS
 config HMM_MIRROR
 	bool
 	depends on MMU
-	depends on MMU_NOTIFIER
 
 config DEVICE_PRIVATE
 	bool "Unaddressable device memory (GPU memory, ...)"
diff --git a/mm/hmm.c b/mm/hmm.c
index 22ac3595771feb..75d15a820e182e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -26,193 +26,6 @@
 #include <linux/mmu_notifier.h>
 #include <linux/memory_hotplug.h>
 
-static struct mmu_notifier *hmm_alloc_notifier(struct mm_struct *mm)
-{
-	struct hmm *hmm;
-
-	hmm = kzalloc(sizeof(*hmm), GFP_KERNEL);
-	if (!hmm)
-		return ERR_PTR(-ENOMEM);
-
-	init_waitqueue_head(&hmm->wq);
-	INIT_LIST_HEAD(&hmm->mirrors);
-	init_rwsem(&hmm->mirrors_sem);
-	INIT_LIST_HEAD(&hmm->ranges);
-	spin_lock_init(&hmm->ranges_lock);
-	hmm->notifiers = 0;
-	return &hmm->mmu_notifier;
-}
-
-static void hmm_free_notifier(struct mmu_notifier *mn)
-{
-	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
-
-	WARN_ON(!list_empty(&hmm->ranges));
-	WARN_ON(!list_empty(&hmm->mirrors));
-	kfree(hmm);
-}
-
-static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
-{
-	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
-	struct hmm_mirror *mirror;
-
-	/*
-	 * Since hmm_range_register() holds the mmget() lock hmm_release() is
-	 * prevented as long as a range exists.
-	 */
-	WARN_ON(!list_empty_careful(&hmm->ranges));
-
-	down_read(&hmm->mirrors_sem);
-	list_for_each_entry(mirror, &hmm->mirrors, list) {
-		/*
-		 * Note: The driver is not allowed to trigger
-		 * hmm_mirror_unregister() from this thread.
-		 */
-		if (mirror->ops->release)
-			mirror->ops->release(mirror);
-	}
-	up_read(&hmm->mirrors_sem);
-}
-
-static void notifiers_decrement(struct hmm *hmm)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&hmm->ranges_lock, flags);
-	hmm->notifiers--;
-	if (!hmm->notifiers) {
-		struct hmm_range *range;
-
-		list_for_each_entry(range, &hmm->ranges, list) {
-			if (range->valid)
-				continue;
-			range->valid = true;
-		}
-		wake_up_all(&hmm->wq);
-	}
-	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
-}
-
-static int hmm_invalidate_range_start(struct mmu_notifier *mn,
-			const struct mmu_notifier_range *nrange)
-{
-	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
-	struct hmm_mirror *mirror;
-	struct hmm_range *range;
-	unsigned long flags;
-	int ret = 0;
-
-	spin_lock_irqsave(&hmm->ranges_lock, flags);
-	hmm->notifiers++;
-	list_for_each_entry(range, &hmm->ranges, list) {
-		if (nrange->end < range->start || nrange->start >= range->end)
-			continue;
-
-		range->valid = false;
-	}
-	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
-
-	if (mmu_notifier_range_blockable(nrange))
-		down_read(&hmm->mirrors_sem);
-	else if (!down_read_trylock(&hmm->mirrors_sem)) {
-		ret = -EAGAIN;
-		goto out;
-	}
-
-	list_for_each_entry(mirror, &hmm->mirrors, list) {
-		int rc;
-
-		rc = mirror->ops->sync_cpu_device_pagetables(mirror, nrange);
-		if (rc) {
-			if (WARN_ON(mmu_notifier_range_blockable(nrange) ||
-			    rc != -EAGAIN))
-				continue;
-			ret = -EAGAIN;
-			break;
-		}
-	}
-	up_read(&hmm->mirrors_sem);
-
-out:
-	if (ret)
-		notifiers_decrement(hmm);
-	return ret;
-}
-
-static void hmm_invalidate_range_end(struct mmu_notifier *mn,
-			const struct mmu_notifier_range *nrange)
-{
-	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
-
-	notifiers_decrement(hmm);
-}
-
-static const struct mmu_notifier_ops hmm_mmu_notifier_ops = {
-	.release		= hmm_release,
-	.invalidate_range_start	= hmm_invalidate_range_start,
-	.invalidate_range_end	= hmm_invalidate_range_end,
-	.alloc_notifier		= hmm_alloc_notifier,
-	.free_notifier		= hmm_free_notifier,
-};
-
-/*
- * hmm_mirror_register() - register a mirror against an mm
- *
- * @mirror: new mirror struct to register
- * @mm: mm to register against
- * Return: 0 on success, -ENOMEM if no memory, -EINVAL if invalid arguments
- *
- * To start mirroring a process address space, the device driver must register
- * an HMM mirror struct.
- *
- * The caller cannot unregister the hmm_mirror while any ranges are
- * registered.
- *
- * Callers using this function must put a call to mmu_notifier_synchronize()
- * in their module exit functions.
- */
-int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm)
-{
-	struct mmu_notifier *mn;
-
-	lockdep_assert_held_write(&mm->mmap_sem);
-
-	/* Sanity check */
-	if (!mm || !mirror || !mirror->ops)
-		return -EINVAL;
-
-	mn = mmu_notifier_get_locked(&hmm_mmu_notifier_ops, mm);
-	if (IS_ERR(mn))
-		return PTR_ERR(mn);
-	mirror->hmm = container_of(mn, struct hmm, mmu_notifier);
-
-	down_write(&mirror->hmm->mirrors_sem);
-	list_add(&mirror->list, &mirror->hmm->mirrors);
-	up_write(&mirror->hmm->mirrors_sem);
-
-	return 0;
-}
-EXPORT_SYMBOL(hmm_mirror_register);
-
-/*
- * hmm_mirror_unregister() - unregister a mirror
- *
- * @mirror: mirror struct to unregister
- *
- * Stop mirroring a process address space, and cleanup.
- */
-void hmm_mirror_unregister(struct hmm_mirror *mirror)
-{
-	struct hmm *hmm = mirror->hmm;
-
-	down_write(&hmm->mirrors_sem);
-	list_del(&mirror->list);
-	up_write(&hmm->mirrors_sem);
-	mmu_notifier_put(&hmm->mmu_notifier);
-}
-EXPORT_SYMBOL(hmm_mirror_unregister);
-
 struct hmm_vma_walk {
 	struct hmm_range	*range;
 	struct dev_pagemap	*pgmap;
@@ -779,87 +592,6 @@ static void hmm_pfns_clear(struct hmm_range *range,
 		*pfns = range->values[HMM_PFN_NONE];
 }
 
-/*
- * hmm_range_register() - start tracking change to CPU page table over a range
- * @range: range
- * @mm: the mm struct for the range of virtual address
- *
- * Return: 0 on success, -EFAULT if the address space is no longer valid
- *
- * Track updates to the CPU page table see include/linux/hmm.h
- */
-int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror)
-{
-	struct hmm *hmm = mirror->hmm;
-	unsigned long flags;
-
-	range->valid = false;
-	range->hmm = NULL;
-
-	if ((range->start & (PAGE_SIZE - 1)) || (range->end & (PAGE_SIZE - 1)))
-		return -EINVAL;
-	if (range->start >= range->end)
-		return -EINVAL;
-
-	/* Prevent hmm_release() from running while the range is valid */
-	if (!mmget_not_zero(hmm->mmu_notifier.mm))
-		return -EFAULT;
-
-	/* Initialize range to track CPU page table updates. */
-	spin_lock_irqsave(&hmm->ranges_lock, flags);
-
-	range->hmm = hmm;
-	list_add(&range->list, &hmm->ranges);
-
-	/*
-	 * If there are any concurrent notifiers we have to wait for them for
-	 * the range to be valid (see hmm_range_wait_until_valid()).
-	 */
-	if (!hmm->notifiers)
-		range->valid = true;
-	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
-
-	return 0;
-}
-EXPORT_SYMBOL(hmm_range_register);
-
-/*
- * hmm_range_unregister() - stop tracking change to CPU page table over a range
- * @range: range
- *
- * Range struct is used to track updates to the CPU page table after a call to
- * hmm_range_register(). See include/linux/hmm.h for how to use it.
- */
-void hmm_range_unregister(struct hmm_range *range)
-{
-	struct hmm *hmm = range->hmm;
-	unsigned long flags;
-
-	spin_lock_irqsave(&hmm->ranges_lock, flags);
-	list_del_init(&range->list);
-	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
-
-	/* Drop reference taken by hmm_range_register() */
-	mmput(hmm->mmu_notifier.mm);
-
-	/*
-	 * The range is now invalid and the ref on the hmm is dropped, so
-	 * poison the pointer.  Leave other fields in place, for the caller's
-	 * use.
-	 */
-	range->valid = false;
-	memset(&range->hmm, POISON_INUSE, sizeof(range->hmm));
-}
-EXPORT_SYMBOL(hmm_range_unregister);
-
-static bool needs_retry(struct hmm_range *range)
-{
-	if (range->notifier)
-		return mmu_range_check_retry(range->notifier,
-					     range->notifier_seq);
-	return !range->valid;
-}
-
 static const struct mm_walk_ops hmm_walk_ops = {
 	.pud_entry	= hmm_vma_walk_pud,
 	.pmd_entry	= hmm_vma_walk_pmd,
@@ -900,20 +632,15 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
 	unsigned long start = range->start, end;
 	struct hmm_vma_walk hmm_vma_walk;
-	struct mm_struct *mm;
+	struct mm_struct *mm = range->notifier->mm;
 	struct vm_area_struct *vma;
 	int ret;
 
-	if (range->notifier)
-		mm = range->notifier->mm;
-	else
-		mm = range->hmm->mmu_notifier.mm;
-
 	lockdep_assert_held(&mm->mmap_sem);
 
 	do {
 		/* If range is no longer valid force retry. */
-		if (needs_retry(range))
+		if (mmu_range_check_retry(range->notifier, range->notifier_seq))
 			return -EBUSY;
 
 		vma = find_vma(mm, start);
@@ -946,7 +673,9 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 			start = hmm_vma_walk.last;
 
 			/* Keep trying while the range is valid. */
-		} while (ret == -EBUSY && !needs_retry(range));
+		} while (ret == -EBUSY &&
+			 !mmu_range_check_retry(range->notifier,
+						range->notifier_seq));
 
 		if (ret) {
 			unsigned long i;
@@ -1004,7 +733,8 @@ long hmm_range_dma_map(struct hmm_range *range, struct device *device,
 			continue;
 
 		/* Check if range is being invalidated */
-		if (needs_retry(range)) {
+		if (mmu_range_check_retry(range->notifier,
+					  range->notifier_seq)) {
 			ret = -EBUSY;
 			goto unmap;
 		}
-- 
2.23.0
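
[Editorial note: the updated hmm.rst usage pattern in this patch boils down to a
collision-retry loop: snapshot the sequence, fault the pages, then re-check the
sequence under the driver lock before programming the device. A compact
userspace simulation of that control flow follows; range_read_begin() and
range_read_retry() stand in for mmu_range_read_begin()/mmu_range_read_retry(),
and the fault/invalidation steps are simulated, not real kernel calls.]

```c
#include <assert.h>
#include <stdatomic.h>

struct notifier {
	atomic_ulong invalidate_seq;
};

static unsigned long range_read_begin(struct notifier *n)
{
	return atomic_load(&n->invalidate_seq);
}

static int range_read_retry(struct notifier *n, unsigned long seq)
{
	return atomic_load(&n->invalidate_seq) != seq;
}

/* Simulated hmm_range_fault(): on the racing pass an invalidation
 * fires while the CPU page tables are being walked. */
static int fault_pages(struct notifier *n, int *racing)
{
	if (*racing) {
		atomic_fetch_add(&n->invalidate_seq, 1);
		*racing = 0;
	}
	return 0;
}

/* The hmm.rst loop: retry until the snapshot taken before faulting is
 * still valid while holding the driver's update lock. */
static int update_device_ptes(struct notifier *n, int *racing, int *retries)
{
	unsigned long seq;

	for (;;) {
		seq = range_read_begin(n);
		if (fault_pages(n, racing))
			return -1;
		/* take_lock(driver->update); */
		if (range_read_retry(n, seq)) {
			/* release_lock(driver->update); */
			(*retries)++;
			continue;
		}
		/* use the pfns array to update the device page table,
		 * under the update lock */
		/* release_lock(driver->update); */
		return 0;
	}
}
```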



* Re: [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER
  2019-10-15 18:12 ` [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER Jason Gunthorpe
@ 2019-10-16  5:11   ` Jürgen Groß
  2019-10-16  6:35     ` Oleksandr Andrushchenko
  0 siblings, 1 reply; 58+ messages in thread
From: Jürgen Groß @ 2019-10-16  5:11 UTC (permalink / raw)
  To: Jason Gunthorpe, Felix.Kuehling, John Hubbard, Ralph Campbell,
	Jerome Glisse
  Cc: Oleksandr Andrushchenko, Stefano Stabellini, linux-mm, amd-gfx,
	dri-devel, xen-devel, Jason Gunthorpe, Boris Ostrovsky,
	Andrea Arcangeli, Ben Skeggs, linux-rdma

On 15.10.19 20:12, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> DMA_SHARED_BUFFER can not be enabled by the user (it represents a library
> set in the kernel). The kconfig convention is to use select for such
> symbols so they are turned on implicitly when the user enables a kconfig
> that needs them.
> 
> Otherwise the XEN_GNTDEV_DMABUF kconfig is overly difficult to enable.
> 
> Fixes: 932d6562179e ("xen/gntdev: Add initial support for dma-buf UAPI")
> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


* Re: [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER
  2019-10-16  5:11   ` Jürgen Groß
@ 2019-10-16  6:35     ` Oleksandr Andrushchenko
  2019-10-21 19:12       ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Oleksandr Andrushchenko @ 2019-10-16  6:35 UTC (permalink / raw)
  To: Jürgen Groß,
	Jason Gunthorpe, Felix.Kuehling, John Hubbard, Ralph Campbell,
	Jerome Glisse
  Cc: Stefano Stabellini, linux-mm, amd-gfx, dri-devel, xen-devel,
	Jason Gunthorpe, Boris Ostrovsky, Andrea Arcangeli, Ben Skeggs,
	linux-rdma

On 10/16/19 8:11 AM, Jürgen Groß wrote:
> On 15.10.19 20:12, Jason Gunthorpe wrote:
>> From: Jason Gunthorpe <jgg@mellanox.com>
>>
>> DMA_SHARED_BUFFER can not be enabled by the user (it represents a 
>> library
>> set in the kernel). The kconfig convention is to use select for such
>> symbols so they are turned on implicitly when the user enables a kconfig
>> that needs them.
>>
>> Otherwise the XEN_GNTDEV_DMABUF kconfig is overly difficult to enable.
>>
>> Fixes: 932d6562179e ("xen/gntdev: Add initial support for dma-buf UAPI")
>> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: xen-devel@lists.xenproject.org
>> Cc: Juergen Gross <jgross@suse.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>
> Juergen


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (14 preceding siblings ...)
  2019-10-15 18:12 ` [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related Jason Gunthorpe
@ 2019-10-16  8:58 ` Christian König
  2019-10-16 16:04   ` Jason Gunthorpe
  2019-10-21 15:55 ` Dennis Dalessandro
  2019-10-21 18:40 ` Jerome Glisse
  17 siblings, 1 reply; 58+ messages in thread
From: Christian König @ 2019-10-16  8:58 UTC (permalink / raw)
  To: Jason Gunthorpe, Jerome Glisse, Ralph Campbell, John Hubbard,
	Felix.Kuehling
  Cc: Andrea Arcangeli, linux-rdma, amd-gfx, linux-mm, Jason Gunthorpe,
	dri-devel, Ben Skeggs

Am 15.10.19 um 20:12 schrieb Jason Gunthorpe:
> From: Jason Gunthorpe <jgg@mellanox.com>
>
> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> they only use invalidate_range_start/end and immediately check the
> invalidating range against some driver data structure to tell if the
> driver is interested. Half of them use an interval_tree, the others are
> simple linear search lists.
>
> Of the ones I checked they largely seem to have various kinds of races,
> bugs and poor implementation. This is a result of the complexity in how
> the notifier interacts with get_user_pages(). It is extremely difficult to
> use it correctly.
>
> Consolidate all of this code together into the core mmu_notifier and
> provide a locking scheme similar to hmm_mirror that allows the user to
> safely use get_user_pages() and reliably know if the page list still
> matches the mm.

That sounds really good, but could you outline for a moment how that is 
achieved?

Please keep in mind that the page reference get_user_pages() grabs is 
*NOT* sufficient to guarantee coherency here.

Regards,
Christian.

>
> This new arrangment plays nicely with the !blockable mode for
> OOM. Scanning the interval tree is done such that the intersection test
> will always succeed, and since there is no invalidate_range_end exposed to
> drivers the scheme safely allows multiple drivers to be subscribed.
>
> Four places are converted as an example of how the new API is used.
> Four are left for future patches:
>   - i915_gem has complex locking around destruction of a registration,
>     needs more study
>   - hfi1 (2nd user) needs access to the rbtree
>   - scif_dma has a complicated logic flow
>   - vhost's mmu notifiers are already being rewritten
>
> This is still being tested, but I figured to send it to start getting help
> from the xen, amd and hfi drivers which I cannot test here.
>
> It would be intended for the hmm tree.
>
> Jason Gunthorpe (15):
>    mm/mmu_notifier: define the header pre-processor parts even if
>      disabled
>    mm/mmu_notifier: add an interval tree notifier
>    mm/hmm: allow hmm_range to be used with a mmu_range_notifier or
>      hmm_mirror
>    mm/hmm: define the pre-processor related parts of hmm.h even if
>      disabled
>    RDMA/odp: Use mmu_range_notifier_insert()
>    RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv
>    drm/radeon: use mmu_range_notifier_insert
>    xen/gntdev: Use select for DMA_SHARED_BUFFER
>    xen/gntdev: use mmu_range_notifier_insert
>    nouveau: use mmu_notifier directly for invalidate_range_start
>    nouveau: use mmu_range_notifier instead of hmm_mirror
>    drm/amdgpu: Call find_vma under mmap_sem
>    drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
>    drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
>    mm/hmm: remove hmm_mirror and related
>
>   Documentation/vm/hmm.rst                      | 105 +---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |   2 +
>   .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   9 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  14 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |   1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        | 445 ++------------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h        |  53 --
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |  13 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 111 ++--
>   drivers/gpu/drm/nouveau/nouveau_svm.c         | 229 +++++---
>   drivers/gpu/drm/radeon/radeon.h               |   9 +-
>   drivers/gpu/drm/radeon/radeon_mn.c            | 218 ++-----
>   drivers/infiniband/core/device.c              |   1 -
>   drivers/infiniband/core/umem_odp.c            | 288 +---------
>   drivers/infiniband/hw/hfi1/file_ops.c         |   2 +-
>   drivers/infiniband/hw/hfi1/hfi.h              |   2 +-
>   drivers/infiniband/hw/hfi1/user_exp_rcv.c     | 144 ++---
>   drivers/infiniband/hw/hfi1/user_exp_rcv.h     |   3 +-
>   drivers/infiniband/hw/mlx5/mlx5_ib.h          |   7 +-
>   drivers/infiniband/hw/mlx5/mr.c               |   3 +-
>   drivers/infiniband/hw/mlx5/odp.c              |  48 +-
>   drivers/xen/Kconfig                           |   3 +-
>   drivers/xen/gntdev-common.h                   |   8 +-
>   drivers/xen/gntdev.c                          | 179 ++----
>   include/linux/hmm.h                           | 195 +------
>   include/linux/mmu_notifier.h                  | 124 +++-
>   include/rdma/ib_umem_odp.h                    |  65 +--
>   include/rdma/ib_verbs.h                       |   2 -
>   kernel/fork.c                                 |   1 -
>   mm/Kconfig                                    |   2 +-
>   mm/hmm.c                                      | 275 +--------
>   mm/mmu_notifier.c                             | 542 +++++++++++++++++-
>   32 files changed, 1180 insertions(+), 1923 deletions(-)
>



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-16  8:58 ` [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Christian König
@ 2019-10-16 16:04   ` Jason Gunthorpe
  2019-10-17  8:54     ` Christian König
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-16 16:04 UTC (permalink / raw)
  To: christian.koenig
  Cc: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling,
	Andrea Arcangeli, linux-rdma, amd-gfx, linux-mm, dri-devel,
	Ben Skeggs

On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
> Am 15.10.19 um 20:12 schrieb Jason Gunthorpe:
> > From: Jason Gunthorpe <jgg@mellanox.com>
> > 
> > 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> > scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> > they only use invalidate_range_start/end and immediately check the
> > invalidating range against some driver data structure to tell if the
> > driver is interested. Half of them use an interval_tree, the others are
> > simple linear search lists.
> > 
> > Of the ones I checked they largely seem to have various kinds of races,
> > bugs and poor implementation. This is a result of the complexity in how
> > the notifier interacts with get_user_pages(). It is extremely difficult to
> > use it correctly.
> > 
> > Consolidate all of this code together into the core mmu_notifier and
> > provide a locking scheme similar to hmm_mirror that allows the user to
> > safely use get_user_pages() and reliably know if the page list still
> > matches the mm.
> 
> That sounds really good, but could you outline for a moment how that is
> achieved?

It uses the same basic scheme as hmm and rdma odp, outlined in the
revisions to hmm.rst later on.

Basically, 

 seq = mmu_range_read_begin(&mrn);

 // This is a speculative region
 .. get_user_pages()/hmm_range_fault() ..
 // Result cannot be dereferenced

 take_lock(driver->update);
 if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
    // collision! The results are not correct
    goto again
 }

 // no collision, and now under lock. Now we can de-reference the pages/etc
 // program HW
 // Now the invalidate callback is responsible to synchronize against changes
 unlock(driver->update) 

Basically, anything that was using hmm_mirror correctly transitions
over fairly trivially, just with the modification to store a sequence
number to close that race described in the hmm commit.

For something like AMD gpu I expect it to transition to use dma_fence
from the notifier for coherency right before it unlocks driver->update.

Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-16 16:04   ` Jason Gunthorpe
@ 2019-10-17  8:54     ` Christian König
  2019-10-17 16:26       ` Yang, Philip
  0 siblings, 1 reply; 58+ messages in thread
From: Christian König @ 2019-10-17  8:54 UTC (permalink / raw)
  To: Jason Gunthorpe, christian.koenig
  Cc: Andrea Arcangeli, Ralph Campbell, linux-rdma, John Hubbard,
	Felix.Kuehling, amd-gfx, linux-mm, Jerome Glisse, dri-devel,
	Ben Skeggs

Am 16.10.19 um 18:04 schrieb Jason Gunthorpe:
> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>> Am 15.10.19 um 20:12 schrieb Jason Gunthorpe:
>>> From: Jason Gunthorpe <jgg@mellanox.com>
>>>
>>> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
>>> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
>>> they only use invalidate_range_start/end and immediately check the
>>> invalidating range against some driver data structure to tell if the
>>> driver is interested. Half of them use an interval_tree, the others are
>>> simple linear search lists.
>>>
>>> Of the ones I checked they largely seem to have various kinds of races,
>>> bugs and poor implementation. This is a result of the complexity in how
>>> the notifier interacts with get_user_pages(). It is extremely difficult to
>>> use it correctly.
>>>
>>> Consolidate all of this code together into the core mmu_notifier and
>>> provide a locking scheme similar to hmm_mirror that allows the user to
>>> safely use get_user_pages() and reliably know if the page list still
>>> matches the mm.
>> That sounds really good, but could you outline for a moment how that is
>> achieved?
> It uses the same basic scheme as hmm and rdma odp, outlined in the
> revisions to hmm.rst later on.
>
> Basically,
>
>   seq = mmu_range_read_begin(&mrn);
>
>   // This is a speculative region
>   .. get_user_pages()/hmm_range_fault() ..

How do we enforce that this get_user_pages()/hmm_range_fault() doesn't 
see outdated page table information?

In other words, how is the following race prevented:

CPU A                                CPU B
invalidate_range_start()
                                     mmu_range_read_begin()
                                     get_user_pages()/hmm_range_fault()
Updating the ptes
invalidate_range_end()


I mean get_user_pages() tries to circumvent this issue by grabbing a 
reference to the pages in question, but that isn't sufficient for the 
SVM use case.

That's the reason why we had this horrible solution with a r/w lock and 
a linked list of BOs in an interval tree.

Regards,
Christian.

>   // Result cannot be dereferenced
>
>   take_lock(driver->update);
>   if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
>      // collision! The results are not correct
>      goto again
>   }
>
>   // no collision, and now under lock. Now we can de-reference the pages/etc
>   // program HW
>   // Now the invalidate callback is responsible to synchronize against changes
>   unlock(driver->update)
>
> Basically, anything that was using hmm_mirror correctly transitions
> over fairly trivially, just with the modification to store a sequence
> number to close that race described in the hmm commit.
>
> For something like AMD gpu I expect it to transition to use dma_fence
> from the notifier for coherency right before it unlocks driver->update.
>
> Jason
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-17  8:54     ` Christian König
@ 2019-10-17 16:26       ` Yang, Philip
  2019-10-17 16:47         ` Koenig, Christian
  0 siblings, 1 reply; 58+ messages in thread
From: Yang, Philip @ 2019-10-17 16:26 UTC (permalink / raw)
  To: Koenig, Christian, Jason Gunthorpe
  Cc: Andrea Arcangeli, Ralph Campbell, linux-rdma, John Hubbard,
	Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse, dri-devel,
	Ben Skeggs



On 2019-10-17 4:54 a.m., Christian König wrote:
> Am 16.10.19 um 18:04 schrieb Jason Gunthorpe:
>> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>>> Am 15.10.19 um 20:12 schrieb Jason Gunthorpe:
>>>> From: Jason Gunthorpe <jgg@mellanox.com>
>>>>
>>>> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, 
>>>> hfi1,
>>>> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
>>>> they only use invalidate_range_start/end and immediately check the
>>>> invalidating range against some driver data structure to tell if the
>>>> driver is interested. Half of them use an interval_tree, the others are
>>>> simple linear search lists.
>>>>
>>>> Of the ones I checked they largely seem to have various kinds of races,
>>>> bugs and poor implementation. This is a result of the complexity in how
>>>> the notifier interacts with get_user_pages(). It is extremely 
>>>> difficult to
>>>> use it correctly.
>>>>
>>>> Consolidate all of this code together into the core mmu_notifier and
>>>> provide a locking scheme similar to hmm_mirror that allows the user to
>>>> safely use get_user_pages() and reliably know if the page list still
>>>> matches the mm.
>>> That sounds really good, but could you outline for a moment how that is
>>> achieved?
>> It uses the same basic scheme as hmm and rdma odp, outlined in the
>> revisions to hmm.rst later on.
>>
>> Basically,
>>
>>   seq = mmu_range_read_begin(&mrn);
>>
>>   // This is a speculative region
>>   .. get_user_pages()/hmm_range_fault() ..
> 
> How do we enforce that this get_user_pages()/hmm_range_fault() doesn't 
> see outdated page table information?
> 
> In other words, how is the following race prevented:
> 
> CPU A                                CPU B
> invalidate_range_start()
>                                      mmu_range_read_begin()
>                                      get_user_pages()/hmm_range_fault()
> Updating the ptes
> invalidate_range_end()
> 
> 
> I mean get_user_pages() tries to circumvent this issue by grabbing a 
> reference to the pages in question, but that isn't sufficient for the 
> SVM use case.
> 
> That's the reason why we had this horrible solution with a r/w lock and 
> a linked list of BOs in an interval tree.
> 
> Regards,
> Christian.
get_user_pages/hmm_range_fault() and invalidate_range_start() both are 
called while holding mm->mmap_sem, so they are always serialized.

Philip
> 
>>   // Result cannot be dereferenced
>>
>>   take_lock(driver->update);
>>   if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
>>      // collision! The results are not correct
>>      goto again
>>   }
>>
>>   // no collision, and now under lock. Now we can de-reference the 
>> pages/etc
>>   // program HW
>>   // Now the invalidate callback is responsible to synchronize against 
>> changes
>>   unlock(driver->update)
>>
>> Basically, anything that was using hmm_mirror correctly transitions
>> over fairly trivially, just with the modification to store a sequence
>> number to close that race described in the hmm commit.
>>
>> For something like AMD gpu I expect it to transition to use dma_fence
>> from the notifier for coherency right before it unlocks driver->update.
>>
>> Jason
> 


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-17 16:26       ` Yang, Philip
@ 2019-10-17 16:47         ` Koenig, Christian
  2019-10-18 20:36           ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Koenig, Christian @ 2019-10-17 16:47 UTC (permalink / raw)
  To: Yang, Philip, Jason Gunthorpe
  Cc: Andrea Arcangeli, Ralph Campbell, linux-rdma, John Hubbard,
	Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse, dri-devel,
	Ben Skeggs

Sending once more as text.

Am 17.10.19 um 18:26 schrieb Yang, Philip:
> On 2019-10-17 4:54 a.m., Christian König wrote:
>> Am 16.10.19 um 18:04 schrieb Jason Gunthorpe:
>>> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>>>> Am 15.10.19 um 20:12 schrieb Jason Gunthorpe:
>>>>> From: Jason Gunthorpe <jgg@mellanox.com>
>>>>>
>>>>> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp,
>>>>> hfi1,
>>>>> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
>>>>> they only use invalidate_range_start/end and immediately check the
>>>>> invalidating range against some driver data structure to tell if the
>>>>> driver is interested. Half of them use an interval_tree, the others are
>>>>> simple linear search lists.
>>>>>
>>>>> Of the ones I checked they largely seem to have various kinds of races,
>>>>> bugs and poor implementation. This is a result of the complexity in how
>>>>> the notifier interacts with get_user_pages(). It is extremely
>>>>> difficult to
>>>>> use it correctly.
>>>>>
>>>>> Consolidate all of this code together into the core mmu_notifier and
>>>>> provide a locking scheme similar to hmm_mirror that allows the user to
>>>>> safely use get_user_pages() and reliably know if the page list still
>>>>> matches the mm.
>>>> That sounds really good, but could you outline for a moment how that is
>>>> achieved?
>>> It uses the same basic scheme as hmm and rdma odp, outlined in the
>>> revisions to hmm.rst later on.
>>>
>>> Basically,
>>>
>>>    seq = mmu_range_read_begin(&mrn);
>>>
>>>    // This is a speculative region
>>>    .. get_user_pages()/hmm_range_fault() ..
>> How do we enforce that this get_user_pages()/hmm_range_fault() doesn't
>> see outdated page table information?
>>
>> In other words, how is the following race prevented:
>>
>> CPU A                                CPU B
>> invalidate_range_start()
>>                                      mmu_range_read_begin()
>>                                      get_user_pages()/hmm_range_fault()
>> Updating the ptes
>> invalidate_range_end()
>>
>>
>> I mean get_user_pages() tries to circumvent this issue by grabbing a
>> reference to the pages in question, but that isn't sufficient for the
>> SVM use case.
>>
>> That's the reason why we had this horrible solution with a r/w lock and
>> a linked list of BOs in an interval tree.
>>
>> Regards,
>> Christian.
> get_user_pages/hmm_range_fault() and invalidate_range_start() both are
> called while holding mm->mmap_sem, so they are always serialized.

Not even remotely.

For calling get_user_pages()/hmm_range_fault() you only need to hold the 
mmap_sem in read mode.

And IIRC invalidate_range_start() is sometimes called without holding 
the mmap_sem at all.

So again how are they serialized?

Regards,
Christian.

>
> Philip
>>>    // Result cannot be dereferenced
>>>
>>>    take_lock(driver->update);
>>>    if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
>>>       // collision! The results are not correct
>>>       goto again
>>>    }
>>>
>>>    // no collision, and now under lock. Now we can de-reference the
>>> pages/etc
>>>    // program HW
>>>    // Now the invalidate callback is responsible to synchronize against
>>> changes
>>>    unlock(driver->update)
>>>
>>> Basically, anything that was using hmm_mirror correctly transitions
>>> over fairly trivially, just with the modification to store a sequence
>>> number to close that race described in the hmm commit.
>>>
>>> For something like AMD gpu I expect it to transition to use dma_fence
>>> from the notifier for coherency right before it unlocks driver->update.
>>>
>>> Jason



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-17 16:47         ` Koenig, Christian
@ 2019-10-18 20:36           ` Jason Gunthorpe
  2019-10-20 14:21             ` Koenig, Christian
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-18 20:36 UTC (permalink / raw)
  To: Koenig, Christian
  Cc: Yang, Philip, Andrea Arcangeli, Ralph Campbell, linux-rdma,
	John Hubbard, Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse,
	dri-devel, Ben Skeggs

On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:

> > get_user_pages/hmm_range_fault() and invalidate_range_start() both are
> > called while holding mm->mmap_sem, so they are always serialized.
> 
> Not even remotely.
> 
> For calling get_user_pages()/hmm_range_fault() you only need to hold the 
> mmap_sem in read mode.

Right
 
> And IIRC invalidate_range_start() is sometimes called without holding 
> the mmap_sem at all.

Yep
 
> So again how are they serialized?

The 'driver lock' thing does it, read the hmm documentation; the hmm
approach is basically the only approach that was correct out of all the
drivers.

So long as the 'driver lock' is held the range cannot become
invalidated as the 'driver lock' prevents progress of invalidation.

Holding the driver lock and using the seq based mmu_range_read_retry()
tells if the previous unlocked get_user_pages() is still valid or
needs to be discard.

So it doesn't matter if get_user_pages() races or not, the result is not
to be used until the driver lock is held and mmu_range_read_retry()
called, which provides the coherence.

It is the usual seqlock pattern.

Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-18 20:36           ` Jason Gunthorpe
@ 2019-10-20 14:21             ` Koenig, Christian
  2019-10-21 13:57               ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Koenig, Christian @ 2019-10-20 14:21 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Yang, Philip, Andrea Arcangeli, Ralph Campbell, linux-rdma,
	John Hubbard, Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse,
	dri-devel, Ben Skeggs

Am 18.10.19 um 22:36 schrieb Jason Gunthorpe:
> On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:
>
>>> get_user_pages/hmm_range_fault() and invalidate_range_start() both are
>>> called while holding mm->mmap_sem, so they are always serialized.
>> Not even remotely.
>>
>> For calling get_user_pages()/hmm_range_fault() you only need to hold the
>> mmap_sem in read mode.
> Right
>   
>> And IIRC invalidate_range_start() is sometimes called without holding
>> the mmap_sem at all.
> Yep
>   
>> So again how are they serialized?
> The 'driver lock' thing does it, read the hmm documentation, the hmm
> approach is basically the only approach that was correct of all the
> drivers..

Well that's what I did, but what HMM does still doesn't look correct
to me.

> So long as the 'driver lock' is held the range cannot become
> invalidated as the 'driver lock' prevents progress of invalidation.

Correct, but the problem is it doesn't wait for ongoing operations to 
complete.

See I'm talking about the following case:

Thread A             Thread B
invalidate_range_start()
                     mmu_range_read_begin()
                     get_user_pages()/hmm_range_fault()
                     grab_driver_lock()
Updating the ptes
invalidate_range_end()

As far as I can see in invalidate_range_start() the driver lock is taken 
to make sure that we can't start any invalidation while the driver is 
using the pages for a command submission.

But the pages we got from get_user_pages()/hmm_range_fault() might not 
be up to date because the driver lock is also dropped again in 
invalidate_range_start() and not in invalidate_range_end().

> Holding the driver lock and using the seq based mmu_range_read_retry()
> tells if the previous unlocked get_user_pages() is still valid or
> needs to be discard.
>
> So it doesn't matter if get_user_pages() races or not, the result is not
> to be used until the driver lock is held and mmu_range_read_retry()
> called, which provides the coherence.
>
> It is the usual seqlock pattern.

Well we don't update the seqlock after the update to the protected data 
structure (the page table) happened, but rather before that.

That doesn't look like the normal pattern for a seqlock to me, and as far
as I can see that is quite a bug in the HMM design/logic.

Regards,
Christian.

>
> Jason



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-20 14:21             ` Koenig, Christian
@ 2019-10-21 13:57               ` Jason Gunthorpe
  2019-10-21 14:28                 ` Koenig, Christian
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 13:57 UTC (permalink / raw)
  To: Koenig, Christian
  Cc: Yang, Philip, Andrea Arcangeli, Ralph Campbell, linux-rdma,
	John Hubbard, Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse,
	dri-devel, Ben Skeggs

On Sun, Oct 20, 2019 at 02:21:42PM +0000, Koenig, Christian wrote:
> Am 18.10.19 um 22:36 schrieb Jason Gunthorpe:
> > On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:
> >
> >>> get_user_pages/hmm_range_fault() and invalidate_range_start() both are
> >>> called while holding mm->mmap_sem, so they are always serialized.
> >> Not even remotely.
> >>
> >> For calling get_user_pages()/hmm_range_fault() you only need to hold the
> >> mmap_sem in read mode.
> > Right
> >   
> >> And IIRC invalidate_range_start() is sometimes called without holding
> >> the mmap_sem at all.
> > Yep
> >   
> >> So again how are they serialized?
> > The 'driver lock' thing does it, read the hmm documentation, the hmm
> > approach is basically the only approach that was correct of all the
> > drivers..
> 
> Well that's what I did, but what HMM does still doesn't look correct
> to me.

It has a bug, but the basic flow seems to work.

https://patchwork.kernel.org/patch/11191

> > So long as the 'driver lock' is held the range cannot become
> > invalidated as the 'driver lock' prevents progress of invalidation.
> 
> Correct, but the problem is it doesn't wait for ongoing operations to 
> complete.
>
> See I'm talking about the following case:
> 
> Thread A             Thread B
> invalidate_range_start()
>                      mmu_range_read_begin()
>                      get_user_pages()/hmm_range_fault()
>                      grab_driver_lock()
> Updating the ptes
> invalidate_range_end()
> 
> As far as I can see in invalidate_range_start() the driver lock is taken 
> to make sure that we can't start any invalidation while the driver is 
> using the pages for a command submission.

Again, this uses the seqlock like scheme *and* the driver lock.

In this case after grab_driver_lock() mmu_range_read_retry() will
return true if Thread A has progressed to 'Updating the ptes'.

For instance here is how the concurrency resolves for retry:

       CPU1                                CPU2
                                  seq = mmu_range_read_begin()
invalidate_range_start()
  invalidate_seq++ 
                                  get_user_pages()/hmm_range_fault()
  grab_driver_lock()
  ungrab_driver_lock()
                                  grab_driver_lock()
                                  seq != invalidate_seq, so retry

  Updating the ptes

invalidate_range_end()
  invalidate_seq++


And here is how it resolves for blocking:

       CPU1                                CPU2
                                  seq = mmu_range_read_begin()
invalidate_range_start()
                                  get_user_pages()/hmm_range_fault()

                                  grab_driver_lock()
                                  seq == invalidate_seq, so continue
  invalidate_seq++ 
                                  ungrab_driver_lock()
  grab_driver_lock()
  // Cannot progress to 'Updating the ptes' while the driver_lock is held
  ungrab_driver_lock()
  Updating the ptes

invalidate_range_end()
  invalidate_seq++

For the above I've simplified the mechanics of the invalidate_seq; you
need to look through the patch to see how it actually works.

> Well we don't update the seqlock after the update to the protected data 
> structure (the page table) happened, but rather before that.

??? This is what mn_itree_inv_end() does; it is called by
invalidate_range_end().

> That doesn't look like the normal pattern for a seqlock to me, and as far
> as I can see that is quite a bug in the HMM design/logic.

Well, hmm has a bug because it doesn't use a seqlock pattern, see the
above URL.

One of the motivations for this work is to squash that bug by adding a
seqlock like pattern. But the basic hmm flow and collision-retry
approach seems sound.

Do you see a problem with this patch?

Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 13:57               ` Jason Gunthorpe
@ 2019-10-21 14:28                 ` Koenig, Christian
  2019-10-21 15:12                   ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Koenig, Christian @ 2019-10-21 14:28 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Yang, Philip, Andrea Arcangeli, Ralph Campbell, linux-rdma,
	John Hubbard, Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse,
	dri-devel, Ben Skeggs

On 21.10.19 15:57, Jason Gunthorpe wrote:
> On Sun, Oct 20, 2019 at 02:21:42PM +0000, Koenig, Christian wrote:
>> On 18.10.19 22:36, Jason Gunthorpe wrote:
>>> On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:
>>> [SNIP]
>>>    
>>>> So again how are they serialized?
>>> The 'driver lock' thing does it, read the hmm documentation, the hmm
>>> approach is basically the only approach that was correct of all the
>>> drivers..
>> Well that's what I've did, but what HMM does still doesn't looks correct
>> to me.
> It has a bug, but the basic flow seems to work.
>
> https://patchwork.kernel.org/patch/11191

Maybe wrong link? That link looks like an unrelated discussion on kernel 
image relocation.

>>> So long as the 'driver lock' is held the range cannot become
>>> invalidated as the 'driver lock' prevents progress of invalidation.
>> Correct, but the problem is it doesn't wait for ongoing operations to
>> complete.
>>
>> See I'm talking about the following case:
>>
>> Thread A    Thread B
>> invalidate_range_start()
>>                       mmu_range_read_begin()
>>                       get_user_pages()/hmm_range_fault()
>>                       grab_driver_lock()
>> Updating the ptes
>> invalidate_range_end()
>>
>> As far as I can see in invalidate_range_start() the driver lock is taken
>> to make sure that we can't start any invalidation while the driver is
>> using the pages for a command submission.
> Again, this uses the seqlock like scheme *and* the driver lock.
>
> In this case after grab_driver_lock() mmu_range_read_retry() will
> return false if Thread A has progressed to 'updating the ptes.
>
> For instance here is how the concurrency resolves for retry:
>
>         CPU1                                CPU2
>                                    seq = mmu_range_read_begin()
> invalidate_range_start()
>    invalidate_seq++

How that was ordered was what was confusing me. But I've read up on the
code in mmu_range_read_begin() and found the lines I was looking for:

+    if (is_invalidating)
+        wait_event(mmn_mm->wq,
+               READ_ONCE(mmn_mm->invalidate_seq) != seq);

[SNIP]

> For the above I've simplified the mechanics of the invalidate_seq, you
> need to look through the patch to see how it actually works.

Yeah, that you also allow multiple write sides is pretty neat.

>> Well we don't update the seqlock after the update to the protected data
>> structure (the page table) happened, but rather before that.
> ??? This is what mn_itree_inv_end() does, it is called by
> invalidate_range_end
>
>> That doesn't looks like the normal patter for a seqlock to me and as far
>> as I can see that is quite a bug in the HMM design/logic.
> Well, hmm has a bug because it doesn't use a seqlock pattern, see the
> above URL.
>
> One of the motivations for this work is to squash that bug by adding a
> seqlock like pattern. But the basic hmm flow and collision-retry
> approach seems sound.
>
> Do you see a problem with this patch?

No, not any more.

Essentially you are doing the same thing I've tried to do with the 
original amdgpu implementation. The difference is that you don't try to 
use a per range sequence (which is a good idea, we never got that fully 
working) and you allow multiple writers at the same time.

Feel free to stitch an Acked-by: Christian König 
<christian.koenig@amd.com> on patch #2, but you're still doing a bunch of 
things in there which are way beyond my understanding (e.g. where are 
all the SMP barriers?).

Cheers,
Christian.

>
> Jason



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 14:28                 ` Koenig, Christian
@ 2019-10-21 15:12                   ` Jason Gunthorpe
  2019-10-22  7:57                     ` Daniel Vetter
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 15:12 UTC (permalink / raw)
  To: Koenig, Christian
  Cc: Yang, Philip, Andrea Arcangeli, Ralph Campbell, linux-rdma,
	John Hubbard, Kuehling, Felix, amd-gfx, linux-mm, Jerome Glisse,
	dri-devel, Ben Skeggs

On Mon, Oct 21, 2019 at 02:28:46PM +0000, Koenig, Christian wrote:
> On 21.10.19 15:57, Jason Gunthorpe wrote:
> > On Sun, Oct 20, 2019 at 02:21:42PM +0000, Koenig, Christian wrote:
> >> On 18.10.19 22:36, Jason Gunthorpe wrote:
> >>> On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:
> >>> [SNIP]
> >>>    
> >>>> So again how are they serialized?
> >>> The 'driver lock' thing does it, read the hmm documentation, the hmm
> >>> approach is basically the only approach that was correct of all the
> >>> drivers..
> >> Well that's what I've did, but what HMM does still doesn't looks correct
> >> to me.
> > It has a bug, but the basic flow seems to work.
> >
> > https://patchwork.kernel.org/patch/11191
> 
> Maybe wrong link? That link looks like an unrelated discussion on kernel 
> image relocation.

Sorry, it got corrupted:

https://patchwork.kernel.org/patch/11191397/

> >>> So long as the 'driver lock' is held the range cannot become
> >>> invalidated as the 'driver lock' prevents progress of invalidation.
> >> Correct, but the problem is it doesn't wait for ongoing operations to
> >> complete.
> >>
> >> See I'm talking about the following case:
> >>
> >> Thread A    Thread B
> >> invalidate_range_start()
> >>                       mmu_range_read_begin()
> >>                       get_user_pages()/hmm_range_fault()
> >>                       grab_driver_lock()
> >> Updating the ptes
> >> invalidate_range_end()
> >>
> >> As far as I can see in invalidate_range_start() the driver lock is taken
> >> to make sure that we can't start any invalidation while the driver is
> >> using the pages for a command submission.
> > Again, this uses the seqlock like scheme *and* the driver lock.
> >
> > In this case after grab_driver_lock() mmu_range_read_retry() will
> > return false if Thread A has progressed to 'updating the ptes.
> >
> > For instance here is how the concurrency resolves for retry:
> >
> >         CPU1                                CPU2
> >                                    seq = mmu_range_read_begin()
> > invalidate_range_start()
> >    invalidate_seq++
> 
> How that was ordered was what was confusing me. But I've read up on the
> code in mmu_range_read_begin() and found the lines I was looking for:
> 
> +    if (is_invalidating)
> +        wait_event(mmn_mm->wq,
> +               READ_ONCE(mmn_mm->invalidate_seq) != seq);
> 
> [SNIP]

Right, the basic design is that the 'seq' returned by
mmu_range_read_begin() is never currently under invalidation.

Thus, since the starting seq is not under invalidation, if it doesn't
change then we are guaranteed the ptes haven't changed either.
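From the driver's point of view the resulting usage pattern looks roughly
like this (a sketch only, not compilable as-is: driver_lock()/
driver_unlock() and establish_device_ptes() are illustrative stand-ins,
while mmu_range_read_begin()/mmu_range_read_retry() are the names from
this series):

```c
again:
	seq = mmu_range_read_begin(&mrn);

	/* faulting/gup can sleep here, outside the driver lock */
	hmm_range_fault(&range, 0);

	driver_lock();
	if (mmu_range_read_retry(&mrn, seq)) {
		/* an invalidation collided; the page list is stale */
		driver_unlock();
		goto again;
	}
	/* the page list is known to match the mm; program the device */
	establish_device_ptes(&range);
	driver_unlock();
```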

> > For the above I've simplified the mechanics of the invalidate_seq, you
> > need to look through the patch to see how it actually works.
> 
> Yea, that you also allow multiple write sides is pretty neat.

Complicated, but necessary to make the non-blocking OOM stuff able to
read the interval tree under all conditions :\

> > One of the motivations for this work is to squash that bug by adding a
> > seqlock like pattern. But the basic hmm flow and collision-retry
> > approach seems sound.
> >
> > Do you see a problem with this patch?
> 
> No, not any more.

Okay, great, thanks
 
> Essentially you are doing the same thing I've tried to do with the 
> original amdgpu implementation. The difference is that you don't try to 
> use a per range sequence (which is a good idea, we never got that fully 
> working) and you allow multiple writers at the same time.

Yes, ODP had the per-range sequence and it never worked right
either. Keeping track of things during the invalidate_end was too
complex.
 
> Feel free to stitch an Acked-by: Christian König 
> <christian.koenig@amd.com> on patch #2,

Thanks! Can you also take some review and test for the AMD-related
patches? These were quite hard to make; it is very likely I've made an
error..

> but you're still doing a bunch of things in there which are way beyond
> my understanding (e.g. where are all the SMP barriers?).

The only non-locked data is 'struct mmu_range_notifier->invalidate_seq'

On the write side it uses a WRITE_ONCE. The 'seq' it writes is
generated under the mmn_mm->lock spinlock (held before and after the
WRITE_ONCE) such that all concurrent WRITE_ONCE's are storing the same
value. 

Essentially the spinlock is providing the barrier to order write:

invalidate_range_start():
 spin_lock(&mmn_mm->lock);
 mmn_mm->active_invalidate_ranges++;
 mmn_mm->invalidate_seq |= 1;
 *seq = mmn_mm->invalidate_seq;
 spin_unlock(&mmn_mm->lock);

 WRITE_ONCE(mrn->invalidate_seq, cur_seq);

invalidate_range_end()
 spin_lock(&mmn_mm->lock);
 if (--mmn_mm->active_invalidate_ranges)
    mmn_mm->invalidate_seq++
 spin_unlock(&mmn_mm->lock);

ie when we do invalidate_seq++, due to the active_invalidate_ranges
counter and the spinlock, we know all other WRITE_ONCE's have passed
their spin_unlock and no concurrent ones are starting. The spinlock is
providing the barrier here.

On the read side.. It is a similar argument, except with the
driver_lock:

mmu_range_read_begin()
  seq = READ_ONCE(mrn->invalidate_seq);

Here 'seq' may be the current value, or it may be an older
value. Ordering is eventually provided by the driver_lock:

mn_tree_invalidate_start()
 [..]
 WRITE_ONCE(mrn->invalidate_seq, cur_seq);
 driver_lock()
 driver_unlock()

vs
 driver_lock()
   mmu_range_read_begin()
     return seq != READ_ONCE(mrn->invalidate_seq);
 driver_unlock()

Here if mn_tree_invalidate_start() has passed driver_unlock() then
because we do driver_lock() before mmu_range_read_begin() we must
observe the WRITE_ONCE. ie the driver_unlock()/driver_lock() provide
the paired barrier.

If mn_tree_invalidate_start() has not passed driver_lock() then the
mmu_range_read_begin() side prevents it from passing driver_lock()
while it holds the lock. In this case it is OK if we don't observe the
WRITE_ONCE() that was done just before as the invalidate_start()
thread can't proceed to invalidation.

It is very unusual locking; it would be great if others could help
look at it!

The unusual bit in all of this is using a lock's critical region to
'protect' data for read, but updating that same data before the lock's
critical section. ie relying on the unlock barrier to 'release' program
ordered stores done before the lock's own critical region, and the
lock side barrier to 'acquire' those stores.

This approach is borrowed from the hmm mirror implementation..

If for some reason the scheme doesn't work, then the fallback is to
expand the mmn_mm->lock spinlock to protect the mrn->invalidate_seq at
some cost in performance.

Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (15 preceding siblings ...)
  2019-10-16  8:58 ` [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Christian König
@ 2019-10-21 15:55 ` Dennis Dalessandro
  2019-10-21 16:58   ` Jason Gunthorpe
  2019-10-21 18:40 ` Jerome Glisse
  17 siblings, 1 reply; 58+ messages in thread
From: Dennis Dalessandro @ 2019-10-21 15:55 UTC (permalink / raw)
  To: Jason Gunthorpe, Jerome Glisse, Ralph Campbell, John Hubbard,
	Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe

On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> This is still being tested, but I figured to send it to start getting help
> from the xen, amd and hfi drivers which I cannot test here.

Sorry for the delay, I never saw this. I was not on the Cc list and it
didn't register with me that this impacted hfi. I'll take a look and run
it through some hfi1 tests.

-Denny


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 15:55 ` Dennis Dalessandro
@ 2019-10-21 16:58   ` Jason Gunthorpe
  2019-10-22 11:56     ` Dennis Dalessandro
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 16:58 UTC (permalink / raw)
  To: Dennis Dalessandro
  Cc: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling,
	linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs

On Mon, Oct 21, 2019 at 11:55:51AM -0400, Dennis Dalessandro wrote:
> On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> > This is still being tested, but I figured to send it to start getting help
> > from the xen, amd and hfi drivers which I cannot test here.
> 
> Sorry for the delay, I never saw this. I was not on the Cc list and it
> didn't register with me that this impacted hfi. I'll take a look and run
> it through some hfi1 tests.

Hm, you were cc'd on the hfi1 patch of the series:

https://patchwork.kernel.org/patch/11191395/

So you saw that, right?

But it seems that git send-email didn't pull all the cc's together?

Jason


* Re: [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-15 18:12 ` [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier Jason Gunthorpe
@ 2019-10-21 18:30   ` Jerome Glisse
  2019-10-21 18:54     ` Jason Gunthorpe
  2019-10-27 23:15   ` Jason Gunthorpe
  1 sibling, 1 reply; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 18:30 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Jason Gunthorpe, Michal Hocko

On Tue, Oct 15, 2019 at 03:12:29PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> Of the 13 users of mmu_notifiers, 8 of them use only
> invalidate_range_start/end() and immediately intersect the
> mmu_notifier_range with some kind of internal list of VAs.  4 use an
> interval tree (i915_gem, radeon_mn, umem_odp, hfi1). 4 use a linked list
> of some kind (scif_dma, vhost, gntdev, hmm)
> 
> And the remaining 5 either don't use invalidate_range_start() or do some
> special thing with it.
> 
> It turns out that building a correct scheme with an interval tree is
> pretty complicated, particularly if the use case is synchronizing against
> another thread doing get_user_pages().  Many of these implementations have
> various subtle and difficult to fix races.
> 
> This approach puts the interval tree as common code at the top of the mmu
> notifier call tree and implements a shareable locking scheme.
> 
> It includes:
>  - An interval tree tracking VA ranges, with per-range callbacks
>  - A read/write locking scheme for the interval tree that avoids
>    sleeping in the notifier path (for OOM killer)
>  - A sequence counter based collision-retry locking scheme to tell
>    device page fault that a VA range is being concurrently invalidated.
> 
> This is based on various ideas:
> - hmm accumulates invalidated VA ranges and releases them when all
>   invalidates are done, via active_invalidate_ranges count.
>   This approach avoids having to intersect the interval tree twice (as
>   umem_odp does) at the potential cost of a longer device page fault.
> 
> - kvm/umem_odp use a sequence counter to drive the collision retry,
>   via invalidate_seq
> 
> - a deferred work todo list on unlock scheme like RTNL, via deferred_list.
>   This makes adding/removing interval tree members more deterministic
> 
> - seqlock, except this version makes the seqlock idea multi-holder on the
>   write side by protecting it with active_invalidate_ranges and a spinlock
> 
> To minimize MM overhead when only the interval tree is being used, the
> entire SRCU and hlist overheads are dropped using some simple
> branches. Similarly the interval tree overhead is dropped when in hlist
> mode.
> 
> The overhead from the mandatory spinlock is broadly the same as most of
> existing users which already had a lock (or two) of some sort on the
> invalidation path.
> 
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
> ---
>  include/linux/mmu_notifier.h |  78 ++++++
>  mm/Kconfig                   |   1 +
>  mm/mmu_notifier.c            | 529 +++++++++++++++++++++++++++++++++--
>  3 files changed, 583 insertions(+), 25 deletions(-)
> 
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index 12bd603d318ce7..bc2b12483de127 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -6,10 +6,12 @@
>  #include <linux/spinlock.h>
>  #include <linux/mm_types.h>
>  #include <linux/srcu.h>
> +#include <linux/interval_tree.h>
>  
>  struct mmu_notifier_mm;
>  struct mmu_notifier;
>  struct mmu_notifier_range;
> +struct mmu_range_notifier;
>  
>  /**
>   * enum mmu_notifier_event - reason for the mmu notifier callback
> @@ -32,6 +34,9 @@ struct mmu_notifier_range;
>   * access flags). User should soft dirty the page in the end callback to make
>   * sure that anyone relying on soft dirtyness catch pages that might be written
>   * through non CPU mappings.
> + *
> + * @MMU_NOTIFY_RELEASE: used during mmu_range_notifier invalidate to signal that
> + * the mm refcount is zero and the range is no longer accessible.
>   */
>  enum mmu_notifier_event {
>  	MMU_NOTIFY_UNMAP = 0,
> @@ -39,6 +44,7 @@ enum mmu_notifier_event {
>  	MMU_NOTIFY_PROTECTION_VMA,
>  	MMU_NOTIFY_PROTECTION_PAGE,
>  	MMU_NOTIFY_SOFT_DIRTY,
> +	MMU_NOTIFY_RELEASE,
>  };
>  
>  #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
> @@ -222,6 +228,25 @@ struct mmu_notifier {
>  	unsigned int users;
>  };
>  
> +/**
> + * struct mmu_range_notifier_ops
> + * @invalidate: Upon return the caller must stop using any SPTEs within this
> + *              range, this function can sleep. Return false if blocking was
> + *              required but range is non-blocking
> + */
> +struct mmu_range_notifier_ops {
> +	bool (*invalidate)(struct mmu_range_notifier *mrn,
> +			   const struct mmu_notifier_range *range);
> +};
> +
> +struct mmu_range_notifier {
> +	struct interval_tree_node interval_tree;
> +	const struct mmu_range_notifier_ops *ops;
> +	struct hlist_node deferred_item;
> +	unsigned long invalidate_seq;
> +	struct mm_struct *mm;
> +};
> +
>  #ifdef CONFIG_MMU_NOTIFIER
>  
>  #ifdef CONFIG_LOCKDEP
> @@ -263,6 +288,59 @@ extern int __mmu_notifier_register(struct mmu_notifier *mn,
>  				   struct mm_struct *mm);
>  extern void mmu_notifier_unregister(struct mmu_notifier *mn,
>  				    struct mm_struct *mm);
> +
> +unsigned long mmu_range_read_begin(struct mmu_range_notifier *mrn);
> +int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
> +			      unsigned long start, unsigned long length,
> +			      struct mm_struct *mm);
> +int mmu_range_notifier_insert_locked(struct mmu_range_notifier *mrn,
> +				     unsigned long start, unsigned long length,
> +				     struct mm_struct *mm);
> +void mmu_range_notifier_remove(struct mmu_range_notifier *mrn);
> +
> +/**
> + * mmu_range_read_retry - End a read side critical section against a VA range
> + * mrn: The range under lock
> + * seq: The return of the paired mmu_range_read_begin()
> + *
> + * This MUST be called under a user provided lock that is also held
> + * unconditionally by op->invalidate(). That lock provides the required SMP
> + * barrier for handling invalidate_seq.
> + *
> + * Each call should be paired with a single mmu_range_read_begin() and
> + * should be used to conclude the read side.
> + *
> + * Returns true if an invalidation collided with this critical section, and
> + * the caller should retry.
> + */
> +static inline bool mmu_range_read_retry(struct mmu_range_notifier *mrn,
> +					unsigned long seq)
> +{
> +	return READ_ONCE(mrn->invalidate_seq) != seq;
> +}

What about calling this mmu_range_read_end() instead, to match the
mmu_range_read_begin()?


> +
> +/**
> + * mmu_range_check_retry - Test if a collision has occurred
> + * mrn: The range under lock
> + * seq: The return of the matching mmu_range_read_begin()
> + *
> + * This can be used in the critical section between mmu_range_read_begin() and
> + * mmu_range_read_retry().  A return of true indicates an invalidation has
> + * collided with this lock and a future mmu_range_read_retry() will return
> + * true.
> + *
> + * False is not reliable and only suggests a collision has not happened. It
> + * can be called many times and does not have to hold the user provided lock.
> + *
> + * This call can be used as part of loops and other expensive operations to
> + * expedite a retry.
> + */
> +static inline bool mmu_range_check_retry(struct mmu_range_notifier *mrn,
> +					 unsigned long seq)
> +{
> +	return READ_ONCE(mrn->invalidate_seq) != seq;
> +}
> +
>  extern void __mmu_notifier_mm_destroy(struct mm_struct *mm);
>  extern void __mmu_notifier_release(struct mm_struct *mm);
>  extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
> diff --git a/mm/Kconfig b/mm/Kconfig
> index a5dae9a7eb510a..d0b5046d9aeffd 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -284,6 +284,7 @@ config VIRT_TO_BUS
>  config MMU_NOTIFIER
>  	bool
>  	select SRCU
> +	select INTERVAL_TREE
>  
>  config KSM
>  	bool "Enable KSM for page merging"
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 367670cfd02b7b..5e5e75ebcde4af 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -12,6 +12,7 @@
>  #include <linux/export.h>
>  #include <linux/mm.h>
>  #include <linux/err.h>
> +#include <linux/interval_tree.h>
>  #include <linux/srcu.h>
>  #include <linux/rcupdate.h>
>  #include <linux/sched.h>
> @@ -36,10 +37,243 @@ struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
>  struct mmu_notifier_mm {
>  	/* all mmu notifiers registered in this mm are queued in this list */
>  	struct hlist_head list;
> +	bool has_interval;
>  	/* to serialize the list modifications and hlist_unhashed */
>  	spinlock_t lock;
> +	unsigned long invalidate_seq;
> +	unsigned long active_invalidate_ranges;
> +	struct rb_root_cached itree;
> +	wait_queue_head_t wq;
> +	struct hlist_head deferred_list;
>  };
>  
> +/*
> + * This is a collision-retry read-side/write-side 'lock', a lot like a
> + * seqcount, however this allows multiple write-sides to hold it at
> + * once. Conceptually the write side is protecting the values of the PTEs in
> + * this mm, such that PTES cannot be read into SPTEs while any writer exists.
> + *
> + * Note that the core mm creates nested invalidate_range_start()/end() regions
> + * within the same thread, and runs invalidate_range_start()/end() in parallel
> + * on multiple CPUs. This is designed to not reduce concurrency or block
> + * progress on the mm side.
> + *
> + * As a secondary function, holding the full write side also serves to prevent
> + * writers for the itree, this is an optimization to avoid extra locking
> + * during invalidate_range_start/end notifiers.
> + *
> + * The write side has two states, fully excluded:
> + *  - mm->active_invalidate_ranges != 0
> + *  - mnn->invalidate_seq & 1 == True
> + *  - some range on the mm_struct is being invalidated
> + *  - the itree is not allowed to change
> + *
> + * And partially excluded:
> + *  - mm->active_invalidate_ranges != 0
> + *  - some range on the mm_struct is being invalidated
> + *  - the itree is allowed to change
> + *
> + * The later state avoids some expensive work on inv_end in the common case of
> + * no mrn monitoring the VA.
> + */
> +static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
> +{
> +	lockdep_assert_held(&mmn_mm->lock);
> +	return mmn_mm->invalidate_seq & 1;
> +}
> +
> +static struct mmu_range_notifier *
> +mn_itree_inv_start_range(struct mmu_notifier_mm *mmn_mm,
> +			 const struct mmu_notifier_range *range,
> +			 unsigned long *seq)
> +{
> +	struct interval_tree_node *node;
> +	struct mmu_range_notifier *res = NULL;
> +
> +	spin_lock(&mmn_mm->lock);
> +	mmn_mm->active_invalidate_ranges++;
> +	node = interval_tree_iter_first(&mmn_mm->itree, range->start,
> +					range->end - 1);
> +	if (node) {
> +		mmn_mm->invalidate_seq |= 1;
> +		res = container_of(node, struct mmu_range_notifier,
> +				   interval_tree);
> +	}
> +
> +	*seq = mmn_mm->invalidate_seq;
> +	spin_unlock(&mmn_mm->lock);
> +	return res;
> +}
> +
> +static struct mmu_range_notifier *
> +mn_itree_inv_next(struct mmu_range_notifier *mrn,
> +		  const struct mmu_notifier_range *range)
> +{
> +	struct interval_tree_node *node;
> +
> +	node = interval_tree_iter_next(&mrn->interval_tree, range->start,
> +				       range->end - 1);
> +	if (!node)
> +		return NULL;
> +	return container_of(node, struct mmu_range_notifier, interval_tree);
> +}
> +
> +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
> +{
> +	struct mmu_range_notifier *mrn;
> +	struct hlist_node *next;
> +	bool need_wake = false;
> +
> +	spin_lock(&mmn_mm->lock);
> +	if (--mmn_mm->active_invalidate_ranges ||
> +	    !mn_itree_is_invalidating(mmn_mm)) {
> +		spin_unlock(&mmn_mm->lock);
> +		return;
> +	}
> +
> +	mmn_mm->invalidate_seq++;
> +	need_wake = true;
> +
> +	/*
> +	 * The inv_end incorporates a deferred mechanism like rtnl. Adds and

The rtnl reference is lost on people unfamiliar with the networking
code :) Maybe cite rtnl_lock()/rtnl_unlock() so people have a chance to
grep the right function. Assuming I am myself getting the right
reference :)

> +	 * removes are queued until the final inv_end happens then they are
> +	 * progressed. This arrangement for tree updates is used to avoid
> +	 * using a blocking lock during invalidate_range_start.
> +	 */
> +	hlist_for_each_entry_safe(mrn, next, &mmn_mm->deferred_list,
> +				  deferred_item) {
> +		if (RB_EMPTY_NODE(&mrn->interval_tree.rb))
> +			interval_tree_insert(&mrn->interval_tree,
> +					     &mmn_mm->itree);
> +		else
> +			interval_tree_remove(&mrn->interval_tree,
> +					     &mmn_mm->itree);
> +		hlist_del(&mrn->deferred_item);
> +	}
> +	spin_unlock(&mmn_mm->lock);
> +
> +	/*
> +	 * TODO: Since we already have a spinlock above, this would be faster
> +	 * as wake_up_q
> +	 */
> +	if (need_wake)
> +		wake_up_all(&mmn_mm->wq);
> +}
> +
> +/**
> + * mmu_range_read_begin - Begin a read side critical section against a VA range
> + * mrn: The range to lock
> + *
> + * mmu_range_read_begin()/mmu_range_read_retry() implement a collision-retry
> + * locking scheme similar to seqcount for the VA range under mrn. If the mm
> + * invokes invalidation during the critical section then
> + * mmu_range_read_retry() will return true.
> + *
> + * This is useful to obtain shadow PTEs where teardown or setup of the SPTEs
> + * require a blocking context.  The critical region formed by this lock can
> + * sleep, and the required 'user_lock' can also be a sleeping lock.
> + *
> + * The caller is required to provide a 'user_lock' to serialize both teardown
> + * and setup.
> + *
> + * The return value should be passed to mmu_range_read_retry().
> + */
> +unsigned long mmu_range_read_begin(struct mmu_range_notifier *mrn)
> +{
> +	struct mmu_notifier_mm *mmn_mm = mrn->mm->mmu_notifier_mm;
> +	unsigned long seq;
> +	bool is_invalidating;
> +
> +	/*
> +	 * If the mrn has a different seq value under the user_lock than we
> +	 * started with then it has collided.
> +	 *
> +	 * If the mrn currently has the same seq value as the mmn_mm seq, then
> +	 * it is currently between invalidate_start/end and is colliding.
> +	 *
> +	 * The locking looks broadly like this:
> +	 *   mn_tree_invalidate_start():          mmu_range_read_begin():
> +	 *                                         spin_lock
> +	 *                                          seq = READ_ONCE(mrn->invalidate_seq);
> +	 *                                          seq == mmn_mm->invalidate_seq
> +	 *                                         spin_unlock
> +	 *    spin_lock
> +	 *     seq = ++mmn_mm->invalidate_seq
> +	 *    spin_unlock
> +	 *    mrn->invalidate_seq = seq
> +	 *     op->invalidate_range():
> +	 *       user_lock
> +	 *       user_unlock
> +	 *
> +	 *                          [Required: mmu_range_read_retry() == true]
> +	 *
> +	 *   mn_itree_inv_end():
> +	 *    spin_lock
> +	 *     seq = ++mmn_mm->invalidate_seq
> +	 *    spin_unlock
> +	 *
> +	 *                                        user_lock
> +	 *                                         mmu_range_read_retry():
> +	 *                                          READ_ONCE(mrn->invalidate_seq) != seq
> +	 *                                        user_unlock
> +	 *
> +	 * Logically mrn->invalidate_seq is locked under the user provided
> +	 * lock, however the write is placed before that lock due to the way
> +	 * the API is layered.
> +	 *
> +	 * Barriers are not needed as any races here are closed by an eventual
> +	 * mmu_range_read_retry(), which provides a barrier via the user_lock.
> +	 */
> +	spin_lock(&mmn_mm->lock);
> +	seq = READ_ONCE(mrn->invalidate_seq);
> +	is_invalidating = seq == mmn_mm->invalidate_seq;
> +	spin_unlock(&mmn_mm->lock);
> +
> +	/*
> +	 * mrn->invalidate_seq is always set to an odd value. This ensures
> +	 * that if seq does wrap we will always clear the below sleep in some
> +	 * reasonable time as mmn_mm->invalidate_seq is even in the idle
> +	 * state.

I think this comment should be with the struct mmu_range_notifier
definition, and you should just point to it from here, as the same
comment would be useful down below.

> +	 */
> +	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
> +	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
> +	if (is_invalidating)
> +		wait_event(mmn_mm->wq,
> +			   READ_ONCE(mmn_mm->invalidate_seq) != seq);
> +
> +	/*
> +	 * Notice that mmu_range_read_retry() can already be true at this
> +	 * point, avoiding loops here allows the user of this lock to provide
> +	 * a global time bound.
> +	 */
> +
> +	return seq;
> +}
> +EXPORT_SYMBOL_GPL(mmu_range_read_begin);
> +
> +static void mn_itree_release(struct mmu_notifier_mm *mmn_mm,
> +			     struct mm_struct *mm)
> +{
> +	struct mmu_notifier_range range = {
> +		.flags = MMU_NOTIFIER_RANGE_BLOCKABLE,
> +		.event = MMU_NOTIFY_RELEASE,
> +		.mm = mm,
> +		.start = 0,
> +		.end = ULONG_MAX,
> +	};
> +	struct mmu_range_notifier *mrn;
> +	unsigned long cur_seq;
> +	bool ret;
> +
> +	for (mrn = mn_itree_inv_start_range(mmn_mm, &range, &cur_seq); mrn;
> +	     mrn = mn_itree_inv_next(mrn, &range)) {
> +		ret = mrn->ops->invalidate(mrn, &range);
> +		WARN_ON(ret);
> +	}
> +
> +	mn_itree_inv_end(mmn_mm);
> +}
> +
>  /*
>   * This function can't run concurrently against mmu_notifier_register
>   * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap
> @@ -52,17 +286,24 @@ struct mmu_notifier_mm {
>   * can't go away from under us as exit_mmap holds an mm_count pin
>   * itself.
>   */
> -void __mmu_notifier_release(struct mm_struct *mm)
> +static void mn_hlist_release(struct mmu_notifier_mm *mmn_mm,
> +			     struct mm_struct *mm)
>  {
>  	struct mmu_notifier *mn;
>  	int id;
>  
> +	if (mmn_mm->has_interval)
> +		mn_itree_release(mmn_mm, mm);
> +
> +	if (hlist_empty(&mmn_mm->list))
> +		return;
> +
>  	/*
>  	 * SRCU here will block mmu_notifier_unregister until
>  	 * ->release returns.
>  	 */
>  	id = srcu_read_lock(&srcu);
> -	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
> +	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist)
>  		/*
>  		 * If ->release runs before mmu_notifier_unregister it must be
>  		 * handled, as it's the only way for the driver to flush all
> @@ -72,9 +313,9 @@ void __mmu_notifier_release(struct mm_struct *mm)
>  		if (mn->ops->release)
>  			mn->ops->release(mn, mm);
>  
> -	spin_lock(&mm->mmu_notifier_mm->lock);
> -	while (unlikely(!hlist_empty(&mm->mmu_notifier_mm->list))) {
> -		mn = hlist_entry(mm->mmu_notifier_mm->list.first,
> +	spin_lock(&mmn_mm->lock);
> +	while (unlikely(!hlist_empty(&mmn_mm->list))) {
> +		mn = hlist_entry(mmn_mm->list.first,
>  				 struct mmu_notifier,
>  				 hlist);
>  		/*
> @@ -85,7 +326,7 @@ void __mmu_notifier_release(struct mm_struct *mm)
>  		 */
>  		hlist_del_init_rcu(&mn->hlist);
>  	}
> -	spin_unlock(&mm->mmu_notifier_mm->lock);
> +	spin_unlock(&mmn_mm->lock);
>  	srcu_read_unlock(&srcu, id);
>  
>  	/*
> @@ -100,6 +341,17 @@ void __mmu_notifier_release(struct mm_struct *mm)
>  	synchronize_srcu(&srcu);
>  }
>  
> +void __mmu_notifier_release(struct mm_struct *mm)
> +{
> +	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
> +
> +	if (mmn_mm->has_interval)
> +		mn_itree_release(mmn_mm, mm);
> +
> +	if (!hlist_empty(&mmn_mm->list))
> +		mn_hlist_release(mmn_mm, mm);
> +}
> +
>  /*
>   * If no young bitflag is supported by the hardware, ->clear_flush_young can
>   * unmap the address and return 1 or 0 depending if the mapping previously
> @@ -172,14 +424,41 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
>  	srcu_read_unlock(&srcu, id);
>  }
>  
> -int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
> +static int mn_itree_invalidate(struct mmu_notifier_mm *mmn_mm,
> +				     const struct mmu_notifier_range *range)
> +{
> +	struct mmu_range_notifier *mrn;
> +	unsigned long cur_seq;
> +
> +	for (mrn = mn_itree_inv_start_range(mmn_mm, range, &cur_seq); mrn;
> +	     mrn = mn_itree_inv_next(mrn, range)) {
> +		bool ret;
> +
> +		WRITE_ONCE(mrn->invalidate_seq, cur_seq);
> +		ret = mrn->ops->invalidate(mrn, range);
> +		if (!ret && !WARN_ON(mmu_notifier_range_blockable(range)))

Isn't the logic wrong here? We want to warn if the range was
marked as blockable and invalidate returned false. Also we want
to back off no matter what if invalidate returns false, ie:
    if (!ret) {
        WARN_ON(mmu_notifier_range_blockable(range));
        goto out_would_block;
    }


> +			goto out_would_block;
> +	}
> +	return 0;
> +
> +out_would_block:
> +	/*
> +	 * On -EAGAIN the non-blocking caller is not allowed to call
> +	 * invalidate_range_end()
> +	 */
> +	mn_itree_inv_end(mmn_mm);
> +	return -EAGAIN;
> +}
> +
> +static int mn_hlist_invalidate_range_start(struct mmu_notifier_mm *mmn_mm,
> +					   struct mmu_notifier_range *range)
>  {
>  	struct mmu_notifier *mn;
>  	int ret = 0;
>  	int id;
>  
>  	id = srcu_read_lock(&srcu);
> -	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
> +	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) {
>  		if (mn->ops->invalidate_range_start) {
>  			int _ret;
>  
> @@ -203,15 +482,30 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
>  	return ret;
>  }
>  
> -void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
> -					 bool only_end)
> +int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
> +{
> +	struct mmu_notifier_mm *mmn_mm = range->mm->mmu_notifier_mm;
> +	int ret = 0;
> +
> +	if (mmn_mm->has_interval) {
> +		ret = mn_itree_invalidate(mmn_mm, range);
> +		if (ret)
> +			return ret;
> +	}
> +	if (!hlist_empty(&mmn_mm->list))
> +		return mn_hlist_invalidate_range_start(mmn_mm, range);
> +	return 0;
> +}
> +
> +static void mn_hlist_invalidate_end(struct mmu_notifier_mm *mmn_mm,
> +				    struct mmu_notifier_range *range,
> +				    bool only_end)
>  {
>  	struct mmu_notifier *mn;
>  	int id;
>  
> -	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
>  	id = srcu_read_lock(&srcu);
> -	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
> +	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) {
>  		/*
>  		 * Call invalidate_range here too to avoid the need for the
>  		 * subsystem of having to register an invalidate_range_end
> @@ -238,6 +532,19 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
>  		}
>  	}
>  	srcu_read_unlock(&srcu, id);
> +}
> +
> +void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
> +					 bool only_end)
> +{
> +	struct mmu_notifier_mm *mmn_mm = range->mm->mmu_notifier_mm;
> +
> +	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
> +	if (mmn_mm->has_interval)
> +		mn_itree_inv_end(mmn_mm);
> +
> +	if (!hlist_empty(&mmn_mm->list))
> +		mn_hlist_invalidate_end(mmn_mm, range, only_end);
>  	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
>  }
>  
> @@ -256,8 +563,9 @@ void __mmu_notifier_invalidate_range(struct mm_struct *mm,
>  }
>  
>  /*
> - * Same as mmu_notifier_register but here the caller must hold the
> - * mmap_sem in write mode.
> + * Same as mmu_notifier_register but here the caller must hold the mmap_sem in
> + * write mode. A NULL mn signals the notifier is being registered for itree
> + * mode.
>   */
>  int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
>  {
> @@ -274,9 +582,6 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
>  		fs_reclaim_release(GFP_KERNEL);
>  	}
>  
> -	mn->mm = mm;
> -	mn->users = 1;
> -
>  	if (!mm->mmu_notifier_mm) {
>  		/*
>  		 * kmalloc cannot be called under mm_take_all_locks(), but we
> @@ -284,21 +589,22 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
>  		 * the write side of the mmap_sem.
>  		 */
>  		mmu_notifier_mm =
> -			kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
> +			kzalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
>  		if (!mmu_notifier_mm)
>  			return -ENOMEM;
>  
>  		INIT_HLIST_HEAD(&mmu_notifier_mm->list);
>  		spin_lock_init(&mmu_notifier_mm->lock);
> +		mmu_notifier_mm->invalidate_seq = 2;

Why start at 2?

> +		mmu_notifier_mm->itree = RB_ROOT_CACHED;
> +		init_waitqueue_head(&mmu_notifier_mm->wq);
> +		INIT_HLIST_HEAD(&mmu_notifier_mm->deferred_list);
>  	}
>  
>  	ret = mm_take_all_locks(mm);
>  	if (unlikely(ret))
>  		goto out_clean;
>  
> -	/* Pairs with the mmdrop in mmu_notifier_unregister_* */
> -	mmgrab(mm);
> -
>  	/*
>  	 * Serialize the update against mmu_notifier_unregister. A
>  	 * side note: mmu_notifier_release can't run concurrently with
> @@ -306,13 +612,26 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
>  	 * current->mm or explicitly with get_task_mm() or similar).
>  	 * We can't race against any other mmu notifier method either
>  	 * thanks to mm_take_all_locks().
> +	 *
> +	 * release semantics are provided for users not inside a lock covered
> +	 * by mm_take_all_locks(). acquire can only be used while holding the
> +	 * mmgrab or mmget, and is safe because once created the
> +	 * mmu_notifier_mm is not freed until the mm is destroyed.
>  	 */
>  	if (mmu_notifier_mm)
> -		mm->mmu_notifier_mm = mmu_notifier_mm;
> +		smp_store_release(&mm->mmu_notifier_mm, mmu_notifier_mm);

I do not understand why you need the release semantics here: we
are under the mmap_sem in write mode, and when we release it the
lock barrier will make sure anyone else sees the new
mmu_notifier_mm.

I fail to see the benefit of, or need for, release/acquire
semantics here.

>  
> -	spin_lock(&mm->mmu_notifier_mm->lock);
> -	hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
> -	spin_unlock(&mm->mmu_notifier_mm->lock);
> +	if (mn) {
> +		/* Pairs with the mmdrop in mmu_notifier_unregister_* */
> +		mmgrab(mm);
> +		mn->mm = mm;
> +		mn->users = 1;
> +
> +		spin_lock(&mm->mmu_notifier_mm->lock);
> +		hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
> +		spin_unlock(&mm->mmu_notifier_mm->lock);
> +	} else
> +		mm->mmu_notifier_mm->has_interval = true;
>  
>  	mm_drop_all_locks(mm);
>  	BUG_ON(atomic_read(&mm->mm_users) <= 0);
> @@ -529,6 +848,166 @@ void mmu_notifier_put(struct mmu_notifier *mn)
>  }
>  EXPORT_SYMBOL_GPL(mmu_notifier_put);
>  
> +static int __mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
> +				       unsigned long start,
> +				       unsigned long length,
> +				       struct mmu_notifier_mm *mmn_mm,
> +				       struct mm_struct *mm)
> +{
> +	mrn->mm = mm;
> +	RB_CLEAR_NODE(&mrn->interval_tree.rb);
> +	mrn->interval_tree.start = start;
> +	/*
> +	 * Note that the representation of the intervals in the interval tree
> +	 * considers the ending point as contained in the interval.
> +	 */
> +	if (length == 0 ||
> +	    check_add_overflow(start, length - 1, &mrn->interval_tree.last))
> +		return -EOVERFLOW;
> +
> +	/* pairs with mmdrop in mmu_range_notifier_remove() */
> +	mmgrab(mm);
> +
> +	/*
> +	 * If some invalidate_range_start/end region is going on in parallel
> +	 * we don't know what VA ranges are affected, so we must assume this
> +	 * new range is included.
> +	 *
> +	 * If the itree is invalidating then we are not allowed to change
> +	 * it. Retrying until invalidation is done is tricky due to the
> +	 * possibility for live lock, instead defer the add to the unlock so
> +	 * this algorithm is deterministic.
> +	 *
> +	 * In all cases the value for the mrn->invalidate_seq should be
> +	 * odd, see mmu_range_read_begin()
> +	 */
> +	spin_lock(&mmn_mm->lock);
> +	if (mmn_mm->active_invalidate_ranges) {
> +		if (mn_itree_is_invalidating(mmn_mm))
> +			hlist_add_head(&mrn->deferred_item,
> +				       &mmn_mm->deferred_list);
> +		else {
> +			mmn_mm->invalidate_seq |= 1;
> +			interval_tree_insert(&mrn->interval_tree,
> +					     &mmn_mm->itree);
> +		}
> +		mrn->invalidate_seq = mmn_mm->invalidate_seq;
> +	} else {
> +		WARN_ON(mn_itree_is_invalidating(mmn_mm));
> +		mrn->invalidate_seq = mmn_mm->invalidate_seq - 1;
> +		interval_tree_insert(&mrn->interval_tree, &mmn_mm->itree);
> +	}
> +	spin_unlock(&mmn_mm->lock);
> +	return 0;
> +}
> +
> +/**
> + * mmu_range_notifier_insert - Insert a range notifier
> + * @mrn: Range notifier to register
> + * @start: Starting virtual address to monitor
> + * @length: Length of the range to monitor
> + * @mm : mm_struct to attach to
> + *
> + * This function subscribes the range notifier for notifications from the mm.
> + * Upon return the ops related to mmu_range_notifier will be called whenever
> + * an event that intersects with the given range occurs.
> + *
> + * Upon return the range_notifier may not be present in the interval tree yet.
> + * The caller must use the normal range notifier locking flow via
> + * mmu_range_read_begin() to establish SPTEs for this range.
> + */
> +int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
> +			      unsigned long start, unsigned long length,
> +			      struct mm_struct *mm)
> +{
> +	struct mmu_notifier_mm *mmn_mm;
> +	int ret;
> +
> +	might_lock(&mm->mmap_sem);
> +
> +	mmn_mm = smp_load_acquire(&mm->mmu_notifier_mm);
> +	if (!mmn_mm || !mmn_mm->has_interval) {
> +		ret = mmu_notifier_register(NULL, mm);
> +		if (ret)
> +			return ret;
> +		mmn_mm = mm->mmu_notifier_mm;
> +	}
> +	return __mmu_range_notifier_insert(mrn, start, length, mmn_mm, mm);
> +}
> +EXPORT_SYMBOL_GPL(mmu_range_notifier_insert);
> +
> +int mmu_range_notifier_insert_locked(struct mmu_range_notifier *mrn,
> +				     unsigned long start, unsigned long length,
> +				     struct mm_struct *mm)
> +{
> +	struct mmu_notifier_mm *mmn_mm;
> +	int ret;
> +
> +	lockdep_assert_held_write(&mm->mmap_sem);
> +
> +	mmn_mm = mm->mmu_notifier_mm;

Shouldn't you be using smp_load_acquire()?

> +	if (!mmn_mm || !mmn_mm->has_interval) {
> +		ret = __mmu_notifier_register(NULL, mm);
> +		if (ret)
> +			return ret;
> +		mmn_mm = mm->mmu_notifier_mm;
> +	}
> +	return __mmu_range_notifier_insert(mrn, start, length, mmn_mm, mm);
> +}
> +EXPORT_SYMBOL_GPL(mmu_range_notifier_insert_locked);
> +
> +/**
> + * mmu_range_notifier_remove - Remove a range notifier
> + * @mrn: Range notifier to unregister
> + *
> + * This function must be paired with mmu_range_notifier_insert(). It cannot be
> + * called from any ops callback.
> + *
> + * Once this returns ops callbacks are no longer running on other CPUs and
> + * will not be called in future.
> + */
> +void mmu_range_notifier_remove(struct mmu_range_notifier *mrn)
> +{
> +	struct mm_struct *mm = mrn->mm;
> +	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
> +	unsigned long seq = 0;
> +
> +	might_sleep();
> +
> +	spin_lock(&mmn_mm->lock);
> +	if (mn_itree_is_invalidating(mmn_mm)) {
> +		/*
> +		 * remove is being called after insert put this on the
> +		 * deferred list, but before the deferred list was processed.
> +		 */
> +		if (RB_EMPTY_NODE(&mrn->interval_tree.rb)) {
> +			hlist_del(&mrn->deferred_item);
> +		} else {
> +			hlist_add_head(&mrn->deferred_item,
> +				       &mmn_mm->deferred_list);
> +			seq = mmn_mm->invalidate_seq;
> +		}
> +	} else {
> +		WARN_ON(RB_EMPTY_NODE(&mrn->interval_tree.rb));
> +		interval_tree_remove(&mrn->interval_tree, &mmn_mm->itree);
> +	}
> +	spin_unlock(&mmn_mm->lock);
> +
> +	/*
> +	 * The possible sleep on progress in the invalidation requires the
> +	 * caller not hold any locks held by invalidation callbacks.
> +	 */
> +	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
> +	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
> +	if (seq)
> +		wait_event(mmn_mm->wq,
> +			   READ_ONCE(mmn_mm->invalidate_seq) != seq);
> +
> +	/* pairs with mmgrab in mmu_range_notifier_insert() */
> +	mmdrop(mm);
> +}
> +EXPORT_SYMBOL_GPL(mmu_range_notifier_remove);
> +
>  /**
>   * mmu_notifier_synchronize - Ensure all mmu_notifiers are freed
>   *
> -- 
> 2.23.0
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled
  2019-10-15 18:12 ` [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled Jason Gunthorpe
@ 2019-10-21 18:31   ` Jerome Glisse
  0 siblings, 0 replies; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 18:31 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Jason Gunthorpe

On Tue, Oct 15, 2019 at 03:12:31PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> Only the function calls are stubbed out with static inlines that always
> fail. This is the standard way to write a header for an optional component
> and makes it easier for drivers that only optionally need HMM_MIRROR.
> 
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

> ---
>  include/linux/hmm.h | 59 ++++++++++++++++++++++++++++++++++++---------
>  kernel/fork.c       |  1 -
>  2 files changed, 47 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 8ac1fd6a81af8f..2666eb08a40615 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -62,8 +62,6 @@
>  #include <linux/kconfig.h>
>  #include <asm/pgtable.h>
>  
> -#ifdef CONFIG_HMM_MIRROR
> -
>  #include <linux/device.h>
>  #include <linux/migrate.h>
>  #include <linux/memremap.h>
> @@ -374,6 +372,15 @@ struct hmm_mirror {
>  	struct list_head		list;
>  };
>  
> +/*
> + * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
> + */
> +#define HMM_FAULT_ALLOW_RETRY		(1 << 0)
> +
> +/* Don't fault in missing PTEs, just snapshot the current state. */
> +#define HMM_FAULT_SNAPSHOT		(1 << 1)
> +
> +#ifdef CONFIG_HMM_MIRROR
>  int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);
>  void hmm_mirror_unregister(struct hmm_mirror *mirror);
>  
> @@ -383,14 +390,6 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror);
>  int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror);
>  void hmm_range_unregister(struct hmm_range *range);
>  
> -/*
> - * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
> - */
> -#define HMM_FAULT_ALLOW_RETRY		(1 << 0)
> -
> -/* Don't fault in missing PTEs, just snapshot the current state. */
> -#define HMM_FAULT_SNAPSHOT		(1 << 1)
> -
>  long hmm_range_fault(struct hmm_range *range, unsigned int flags);
>  
>  long hmm_range_dma_map(struct hmm_range *range,
> @@ -401,6 +400,44 @@ long hmm_range_dma_unmap(struct hmm_range *range,
>  			 struct device *device,
>  			 dma_addr_t *daddrs,
>  			 bool dirty);
> +#else
> +static inline int hmm_mirror_register(struct hmm_mirror *mirror,
> +				      struct mm_struct *mm)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void hmm_mirror_unregister(struct hmm_mirror *mirror)
> +{
> +}
> +
> +static inline int hmm_range_register(struct hmm_range *range,
> +				     struct hmm_mirror *mirror)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void hmm_range_unregister(struct hmm_range *range)
> +{
> +}
> +
> +static inline long hmm_range_fault(struct hmm_range *range, unsigned int flags)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline long hmm_range_dma_map(struct hmm_range *range,
> +				     struct device *device, dma_addr_t *daddrs,
> +				     unsigned int flags)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline long hmm_range_dma_unmap(struct hmm_range *range,
> +				       struct device *device,
> +				       dma_addr_t *daddrs, bool dirty)
> +{
> +	return -EOPNOTSUPP;
> +}
> +#endif
>  
>  /*
>   * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
> @@ -411,6 +448,4 @@ long hmm_range_dma_unmap(struct hmm_range *range,
>   */
>  #define HMM_RANGE_DEFAULT_TIMEOUT 1000
>  
> -#endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
> -
>  #endif /* LINUX_HMM_H */
> diff --git a/kernel/fork.c b/kernel/fork.c
> index f9572f41612628..4561a65d19db88 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -40,7 +40,6 @@
>  #include <linux/binfmts.h>
>  #include <linux/mman.h>
>  #include <linux/mmu_notifier.h>
> -#include <linux/hmm.h>
>  #include <linux/fs.h>
>  #include <linux/mm.h>
>  #include <linux/vmacache.h>
> -- 
> 2.23.0
> 



* Re: [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled
  2019-10-15 18:12 ` [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled Jason Gunthorpe
@ 2019-10-21 18:32   ` Jerome Glisse
  0 siblings, 0 replies; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 18:32 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Jason Gunthorpe

On Tue, Oct 15, 2019 at 03:12:28PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> Now that we have KERNEL_HEADER_TEST all headers are generally compile
> tested, so relying on makefile tricks to avoid compiling code that depends
> on CONFIG_MMU_NOTIFIER is more annoying.
> 
> Instead follow the usual pattern and provide most of the header with only
> the functions stubbed out when CONFIG_MMU_NOTIFIER is disabled. This
> ensures code compiles no matter what the config setting is.
> 
> While here, struct mmu_notifier_mm is private to mmu_notifier.c, move it.
> 
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

> ---
>  include/linux/mmu_notifier.h | 46 +++++++++++++-----------------------
>  mm/mmu_notifier.c            | 13 ++++++++++
>  2 files changed, 30 insertions(+), 29 deletions(-)
> 
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index 1bd8e6a09a3c27..12bd603d318ce7 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -7,8 +7,9 @@
>  #include <linux/mm_types.h>
>  #include <linux/srcu.h>
>  
> +struct mmu_notifier_mm;
>  struct mmu_notifier;
> -struct mmu_notifier_ops;
> +struct mmu_notifier_range;
>  
>  /**
>   * enum mmu_notifier_event - reason for the mmu notifier callback
> @@ -40,36 +41,8 @@ enum mmu_notifier_event {
>  	MMU_NOTIFY_SOFT_DIRTY,
>  };
>  
> -#ifdef CONFIG_MMU_NOTIFIER
> -
> -#ifdef CONFIG_LOCKDEP
> -extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
> -#endif
> -
> -/*
> - * The mmu notifier_mm structure is allocated and installed in
> - * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
> - * critical section and it's released only when mm_count reaches zero
> - * in mmdrop().
> - */
> -struct mmu_notifier_mm {
> -	/* all mmu notifiers registerd in this mm are queued in this list */
> -	struct hlist_head list;
> -	/* to serialize the list modifications and hlist_unhashed */
> -	spinlock_t lock;
> -};
> -
>  #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
>  
> -struct mmu_notifier_range {
> -	struct vm_area_struct *vma;
> -	struct mm_struct *mm;
> -	unsigned long start;
> -	unsigned long end;
> -	unsigned flags;
> -	enum mmu_notifier_event event;
> -};
> -
>  struct mmu_notifier_ops {
>  	/*
>  	 * Called either by mmu_notifier_unregister or when the mm is
> @@ -249,6 +222,21 @@ struct mmu_notifier {
>  	unsigned int users;
>  };
>  
> +#ifdef CONFIG_MMU_NOTIFIER
> +
> +#ifdef CONFIG_LOCKDEP
> +extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
> +#endif
> +
> +struct mmu_notifier_range {
> +	struct vm_area_struct *vma;
> +	struct mm_struct *mm;
> +	unsigned long start;
> +	unsigned long end;
> +	unsigned flags;
> +	enum mmu_notifier_event event;
> +};
> +
>  static inline int mm_has_notifiers(struct mm_struct *mm)
>  {
>  	return unlikely(mm->mmu_notifier_mm);
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 7fde88695f35d6..367670cfd02b7b 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -27,6 +27,19 @@ struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
>  };
>  #endif
>  
> +/*
> + * The mmu notifier_mm structure is allocated and installed in
> + * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
> + * critical section and it's released only when mm_count reaches zero
> + * in mmdrop().
> + */
> +struct mmu_notifier_mm {
> +	/* all mmu notifiers registered in this mm are queued in this list */
> +	struct hlist_head list;
> +	/* to serialize the list modifications and hlist_unhashed */
> +	spinlock_t lock;
> +};
> +
>  /*
>   * This function can't run concurrently against mmu_notifier_register
>   * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap
> -- 
> 2.23.0
> 



* Re: [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror
  2019-10-15 18:12 ` [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror Jason Gunthorpe
@ 2019-10-21 18:33   ` Jerome Glisse
  0 siblings, 0 replies; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 18:33 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Jason Gunthorpe

On Tue, Oct 15, 2019 at 03:12:30PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> hmm_mirror's handling of ranges does not use a sequence count which
> results in this bug:
> 
>          CPU0                                   CPU1
>                                      hmm_range_wait_until_valid(range)
>                                          valid == true
>                                      hmm_range_fault(range)
> hmm_invalidate_range_start()
>    range->valid = false
> hmm_invalidate_range_end()
>    range->valid = true
>                                      hmm_range_valid(range)
>                                           valid == true
> 
> Where the hmm_range_valid should not have succeeded.
> 
> Adding the required sequence count would make it nearly identical to the
> new mmu_range_notifier. Instead replace the hmm_mirror stuff with
> mmu_range_notifier.
> 
> Co-existence of the two APIs is the first step.
> 
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

> ---
>  include/linux/hmm.h |  5 +++++
>  mm/hmm.c            | 25 +++++++++++++++++++------
>  2 files changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 3fec513b9c00f1..8ac1fd6a81af8f 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -145,6 +145,9 @@ enum hmm_pfn_value_e {
>  /*
>   * struct hmm_range - track invalidation lock on virtual address range
>   *
> + * @notifier: an optional mmu_range_notifier
> + * @notifier_seq: when notifier is used this is the result of
> + *                mmu_range_read_begin()
>   * @hmm: the core HMM structure this range is active against
>   * @vma: the vm area struct for the range
>   * @list: all range lock are on a list
> @@ -159,6 +162,8 @@ enum hmm_pfn_value_e {
>   * @valid: pfns array did not change since it has been fill by an HMM function
>   */
>  struct hmm_range {
> +	struct mmu_range_notifier *notifier;
> +	unsigned long		notifier_seq;
>  	struct hmm		*hmm;
>  	struct list_head	list;
>  	unsigned long		start;
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 902f5fa6bf93ad..22ac3595771feb 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -852,6 +852,14 @@ void hmm_range_unregister(struct hmm_range *range)
>  }
>  EXPORT_SYMBOL(hmm_range_unregister);
>  
> +static bool needs_retry(struct hmm_range *range)
> +{
> +	if (range->notifier)
> +		return mmu_range_check_retry(range->notifier,
> +					     range->notifier_seq);
> +	return !range->valid;
> +}
> +
>  static const struct mm_walk_ops hmm_walk_ops = {
>  	.pud_entry	= hmm_vma_walk_pud,
>  	.pmd_entry	= hmm_vma_walk_pmd,
> @@ -892,18 +900,23 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
>  	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
>  	unsigned long start = range->start, end;
>  	struct hmm_vma_walk hmm_vma_walk;
> -	struct hmm *hmm = range->hmm;
> +	struct mm_struct *mm;
>  	struct vm_area_struct *vma;
>  	int ret;
>  
> -	lockdep_assert_held(&hmm->mmu_notifier.mm->mmap_sem);
> +	if (range->notifier)
> +		mm = range->notifier->mm;
> +	else
> +		mm = range->hmm->mmu_notifier.mm;
> +
> +	lockdep_assert_held(&mm->mmap_sem);
>  
>  	do {
>  		/* If range is no longer valid force retry. */
> -		if (!range->valid)
> +		if (needs_retry(range))
>  			return -EBUSY;
>  
> -		vma = find_vma(hmm->mmu_notifier.mm, start);
> +		vma = find_vma(mm, start);
>  		if (vma == NULL || (vma->vm_flags & device_vma))
>  			return -EFAULT;
>  
> @@ -933,7 +946,7 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
>  			start = hmm_vma_walk.last;
>  
>  			/* Keep trying while the range is valid. */
> -		} while (ret == -EBUSY && range->valid);
> +		} while (ret == -EBUSY && !needs_retry(range));
>  
>  		if (ret) {
>  			unsigned long i;
> @@ -991,7 +1004,7 @@ long hmm_range_dma_map(struct hmm_range *range, struct device *device,
>  			continue;
>  
>  		/* Check if range is being invalidated */
> -		if (!range->valid) {
> +		if (needs_retry(range)) {
>  			ret = -EBUSY;
>  			goto unmap;
>  		}
> -- 
> 2.23.0
> 



* Re: [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related
  2019-10-15 18:12 ` [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related Jason Gunthorpe
@ 2019-10-21 18:38   ` Jerome Glisse
  2019-10-21 18:57     ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 18:38 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Jason Gunthorpe

On Tue, Oct 15, 2019 at 03:12:42PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> The only two users of this are now converted to use mmu_range_notifier,
> delete all the code and update hmm.rst.

I guess I should point out that the reasons for hmm_mirror and hmm
were:
    1) Maybe define a common API for userspace to provide memory
       placement hints (NUMA for GPU)
    2) Multiple devices sharing the same mirror page table

But support for multi-GPU in nouveau is way behind, and I guess such
an optimization will have to re-materialize what is necessary once
that happens.

Note this patch should also update kernel/fork.c and the mm_struct
definition AFAICT. With those changes you can add my:

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

> 
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
> ---
>  Documentation/vm/hmm.rst | 105 ++++-----------
>  include/linux/hmm.h      | 183 +------------------------
>  mm/Kconfig               |   1 -
>  mm/hmm.c                 | 284 +--------------------------------------
>  4 files changed, 33 insertions(+), 540 deletions(-)
> 
> diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
> index 0a5960beccf76d..a247643035c4e2 100644
> --- a/Documentation/vm/hmm.rst
> +++ b/Documentation/vm/hmm.rst
> @@ -147,49 +147,16 @@ Address space mirroring implementation and API
>  Address space mirroring's main objective is to allow duplication of a range of
>  CPU page table into a device page table; HMM helps keep both synchronized. A
>  device driver that wants to mirror a process address space must start with the
> -registration of an hmm_mirror struct::
> -
> - int hmm_mirror_register(struct hmm_mirror *mirror,
> -                         struct mm_struct *mm);
> -
> -The mirror struct has a set of callbacks that are used
> -to propagate CPU page tables::
> -
> - struct hmm_mirror_ops {
> -     /* release() - release hmm_mirror
> -      *
> -      * @mirror: pointer to struct hmm_mirror
> -      *
> -      * This is called when the mm_struct is being released.  The callback
> -      * must ensure that all access to any pages obtained from this mirror
> -      * is halted before the callback returns. All future access should
> -      * fault.
> -      */
> -     void (*release)(struct hmm_mirror *mirror);
> -
> -     /* sync_cpu_device_pagetables() - synchronize page tables
> -      *
> -      * @mirror: pointer to struct hmm_mirror
> -      * @update: update information (see struct mmu_notifier_range)
> -      * Return: -EAGAIN if update.blockable false and callback need to
> -      *         block, 0 otherwise.
> -      *
> -      * This callback ultimately originates from mmu_notifiers when the CPU
> -      * page table is updated. The device driver must update its page table
> -      * in response to this callback. The update argument tells what action
> -      * to perform.
> -      *
> -      * The device driver must not return from this callback until the device
> -      * page tables are completely updated (TLBs flushed, etc); this is a
> -      * synchronous call.
> -      */
> -     int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
> -                                       const struct hmm_update *update);
> - };
> -
> -The device driver must perform the update action to the range (mark range
> -read only, or fully unmap, etc.). The device must complete the update before
> -the driver callback returns.
> +registration of a mmu_range_notifier::
> +
> + mrn->ops = &driver_ops;
> + int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
> +			      unsigned long start, unsigned long length,
> +			      struct mm_struct *mm);
> +
> +During the driver_ops->invalidate() callback the device driver must perform
> +the update action to the range (mark range read only, or fully unmap,
> +etc.). The device must complete the update before the driver callback returns.
>  
>  When the device driver wants to populate a range of virtual addresses, it can
>  use::
> @@ -216,70 +183,46 @@ The usage pattern is::
>        struct hmm_range range;
>        ...
>  
> +      range.notifier = &mrn;
>        range.start = ...;
>        range.end = ...;
>        range.pfns = ...;
>        range.flags = ...;
>        range.values = ...;
>        range.pfn_shift = ...;
> -      hmm_range_register(&range, mirror);
>  
> -      /*
> -       * Just wait for range to be valid, safe to ignore return value as we
> -       * will use the return value of hmm_range_fault() below under the
> -       * mmap_sem to ascertain the validity of the range.
> -       */
> -      hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
> +      if (!mmget_not_zero(mrn->notifier.mm))
> +          return -EFAULT;
>  
>   again:
> +      range.notifier_seq = mmu_range_read_begin(&mrn);
>        down_read(&mm->mmap_sem);
>        ret = hmm_range_fault(&range, HMM_RANGE_SNAPSHOT);
>        if (ret) {
>            up_read(&mm->mmap_sem);
> -          if (ret == -EBUSY) {
> -            /*
> -             * No need to check hmm_range_wait_until_valid() return value
> -             * on retry we will get proper error with hmm_range_fault()
> -             */
> -            hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
> -            goto again;
> -          }
> -          hmm_range_unregister(&range);
> +          if (ret == -EBUSY)
> +                 goto again;
>            return ret;
>        }
> +      up_read(&mm->mmap_sem);
> +
>        take_lock(driver->update);
> -      if (!hmm_range_valid(&range)) {
> > +      if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
>            release_lock(driver->update);
> -          up_read(&mm->mmap_sem);
>            goto again;
>        }
>  
> -      // Use pfns array content to update device page table
> +      /* Use pfns array content to update device page table,
> +       * under the update lock */
>  
> -      hmm_range_unregister(&range);
>        release_lock(driver->update);
> -      up_read(&mm->mmap_sem);
>        return 0;
>   }
>  
>  The driver->update lock is the same lock that the driver takes inside its
> -sync_cpu_device_pagetables() callback. That lock must be held before calling
> -hmm_range_valid() to avoid any race with a concurrent CPU page table update.
> -
> -HMM implements all this on top of the mmu_notifier API because we wanted a
> -simpler API and also to be able to perform optimizations latter on like doing
> -concurrent device updates in multi-devices scenario.
> -
> -HMM also serves as an impedance mismatch between how CPU page table updates
> -are done (by CPU write to the page table and TLB flushes) and how devices
> -update their own page table. Device updates are a multi-step process. First,
> -appropriate commands are written to a buffer, then this buffer is scheduled for
> -execution on the device. It is only once the device has executed commands in
> -the buffer that the update is done. Creating and scheduling the update command
> -buffer can happen concurrently for multiple devices. Waiting for each device to
> -report commands as executed is serialized (there is no point in doing this
> -concurrently).
> -
> +invalidate() callback. That lock must be held before calling
> +mmu_range_read_retry() to avoid any race with a concurrent CPU page table
> +update.
>  
>  Leverage default_flags and pfn_flags_mask
>  =========================================
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 2666eb08a40615..b4af5173523232 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -68,29 +68,6 @@
>  #include <linux/completion.h>
>  #include <linux/mmu_notifier.h>
>  
> -
> -/*
> - * struct hmm - HMM per mm struct
> - *
> - * @mm: mm struct this HMM struct is bound to
> - * @lock: lock protecting ranges list
> - * @ranges: list of range being snapshotted
> - * @mirrors: list of mirrors for this mm
> - * @mmu_notifier: mmu notifier to track updates to CPU page table
> - * @mirrors_sem: read/write semaphore protecting the mirrors list
> - * @wq: wait queue for user waiting on a range invalidation
> - * @notifiers: count of active mmu notifiers
> - */
> -struct hmm {
> -	struct mmu_notifier	mmu_notifier;
> -	spinlock_t		ranges_lock;
> -	struct list_head	ranges;
> -	struct list_head	mirrors;
> -	struct rw_semaphore	mirrors_sem;
> -	wait_queue_head_t	wq;
> -	long			notifiers;
> -};
> -
>  /*
>   * hmm_pfn_flag_e - HMM flag enums
>   *
> @@ -143,9 +120,8 @@ enum hmm_pfn_value_e {
>  /*
>   * struct hmm_range - track invalidation lock on virtual address range
>   *
> - * @notifier: an optional mmu_range_notifier
> - * @notifier_seq: when notifier is used this is the result of
> - *                mmu_range_read_begin()
> + * @notifier: a mmu_range_notifier that includes the start/end
> + * @notifier_seq: result of mmu_range_read_begin()
>   * @hmm: the core HMM structure this range is active against
>   * @vma: the vm area struct for the range
>   * @list: all range lock are on a list
> @@ -162,8 +138,6 @@ enum hmm_pfn_value_e {
>  struct hmm_range {
>  	struct mmu_range_notifier *notifier;
>  	unsigned long		notifier_seq;
> -	struct hmm		*hmm;
> -	struct list_head	list;
>  	unsigned long		start;
>  	unsigned long		end;
>  	uint64_t		*pfns;
> @@ -172,32 +146,8 @@ struct hmm_range {
>  	uint64_t		default_flags;
>  	uint64_t		pfn_flags_mask;
>  	uint8_t			pfn_shift;
> -	bool			valid;
>  };
>  
> -/*
> - * hmm_range_wait_until_valid() - wait for range to be valid
> - * @range: range affected by invalidation to wait on
> - * @timeout: time out for wait in ms (ie abort wait after that period of time)
> - * Return: true if the range is valid, false otherwise.
> - */
> -static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
> -					      unsigned long timeout)
> -{
> -	return wait_event_timeout(range->hmm->wq, range->valid,
> -				  msecs_to_jiffies(timeout)) != 0;
> -}
> -
> -/*
> - * hmm_range_valid() - test if a range is valid or not
> - * @range: range
> - * Return: true if the range is valid, false otherwise.
> - */
> -static inline bool hmm_range_valid(struct hmm_range *range)
> -{
> -	return range->valid;
> -}
> -
>  /*
>   * hmm_device_entry_to_page() - return struct page pointed to by a device entry
>   * @range: range use to decode device entry value
> @@ -267,111 +217,6 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
>  		range->flags[HMM_PFN_VALID];
>  }
>  
> -/*
> - * Mirroring: how to synchronize device page table with CPU page table.
> - *
> - * A device driver that is participating in HMM mirroring must always
> - * synchronize with CPU page table updates. For this, device drivers can either
> - * directly use mmu_notifier APIs or they can use the hmm_mirror API. Device
> - * drivers can decide to register one mirror per device per process, or just
> - * one mirror per process for a group of devices. The pattern is:
> - *
> - *      int device_bind_address_space(..., struct mm_struct *mm, ...)
> - *      {
> - *          struct device_address_space *das;
> - *
> - *          // Device driver specific initialization, and allocation of das
> - *          // which contains an hmm_mirror struct as one of its fields.
> - *          ...
> - *
> - *          ret = hmm_mirror_register(&das->mirror, mm, &device_mirror_ops);
> - *          if (ret) {
> - *              // Cleanup on error
> - *              return ret;
> - *          }
> - *
> - *          // Other device driver specific initialization
> - *          ...
> - *      }
> - *
> - * Once an hmm_mirror is registered for an address space, the device driver
> - * will get callbacks through sync_cpu_device_pagetables() operation (see
> - * hmm_mirror_ops struct).
> - *
> - * Device driver must not free the struct containing the hmm_mirror struct
> - * before calling hmm_mirror_unregister(). The expected usage is to do that when
> - * the device driver is unbinding from an address space.
> - *
> - *
> - *      void device_unbind_address_space(struct device_address_space *das)
> - *      {
> - *          // Device driver specific cleanup
> - *          ...
> - *
> - *          hmm_mirror_unregister(&das->mirror);
> - *
> - *          // Other device driver specific cleanup, and now das can be freed
> - *          ...
> - *      }
> - */
> -
> -struct hmm_mirror;
> -
> -/*
> - * struct hmm_mirror_ops - HMM mirror device operations callback
> - *
> - * @update: callback to update range on a device
> - */
> -struct hmm_mirror_ops {
> -	/* release() - release hmm_mirror
> -	 *
> -	 * @mirror: pointer to struct hmm_mirror
> -	 *
> -	 * This is called when the mm_struct is being released.  The callback
> -	 * must ensure that all access to any pages obtained from this mirror
> -	 * is halted before the callback returns. All future access should
> -	 * fault.
> -	 */
> -	void (*release)(struct hmm_mirror *mirror);
> -
> -	/* sync_cpu_device_pagetables() - synchronize page tables
> -	 *
> -	 * @mirror: pointer to struct hmm_mirror
> -	 * @update: update information (see struct mmu_notifier_range)
> -	 * Return: -EAGAIN if mmu_notifier_range_blockable(update) is false
> -	 * and callback needs to block, 0 otherwise.
> -	 *
> -	 * This callback ultimately originates from mmu_notifiers when the CPU
> -	 * page table is updated. The device driver must update its page table
> -	 * in response to this callback. The update argument tells what action
> -	 * to perform.
> -	 *
> -	 * The device driver must not return from this callback until the device
> -	 * page tables are completely updated (TLBs flushed, etc); this is a
> -	 * synchronous call.
> -	 */
> -	int (*sync_cpu_device_pagetables)(
> -		struct hmm_mirror *mirror,
> -		const struct mmu_notifier_range *update);
> -};
> -
> -/*
> - * struct hmm_mirror - mirror struct for a device driver
> - *
> - * @hmm: pointer to struct hmm (which is unique per mm_struct)
> - * @ops: device driver callback for HMM mirror operations
> - * @list: for list of mirrors of a given mm
> - *
> - * Each address space (mm_struct) being mirrored by a device must register one
> - * instance of an hmm_mirror struct with HMM. HMM will track the list of all
> - * mirrors for each mm_struct.
> - */
> -struct hmm_mirror {
> -	struct hmm			*hmm;
> -	const struct hmm_mirror_ops	*ops;
> -	struct list_head		list;
> -};
> -
>  /*
>   * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
>   */
> @@ -381,15 +226,9 @@ struct hmm_mirror {
>  #define HMM_FAULT_SNAPSHOT		(1 << 1)
>  
>  #ifdef CONFIG_HMM_MIRROR
> -int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);
> -void hmm_mirror_unregister(struct hmm_mirror *mirror);
> -
>  /*
>   * Please see Documentation/vm/hmm.rst for how to use the range API.
>   */
> -int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror);
> -void hmm_range_unregister(struct hmm_range *range);
> -
>  long hmm_range_fault(struct hmm_range *range, unsigned int flags);
>  
>  long hmm_range_dma_map(struct hmm_range *range,
> @@ -401,24 +240,6 @@ long hmm_range_dma_unmap(struct hmm_range *range,
>  			 dma_addr_t *daddrs,
>  			 bool dirty);
>  #else
> -int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm)
> -{
> -	return -EOPNOTSUPP;
> -}
> -
> -void hmm_mirror_unregister(struct hmm_mirror *mirror)
> -{
> -}
> -
> -int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror)
> -{
> -	return -EOPNOTSUPP;
> -}
> -
> -void hmm_range_unregister(struct hmm_range *range)
> -{
> -}
> -
>  static inline long hmm_range_fault(struct hmm_range *range, unsigned int flags)
>  {
>  	return -EOPNOTSUPP;
> diff --git a/mm/Kconfig b/mm/Kconfig
> index d0b5046d9aeffd..e38ff1d5968dbf 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -675,7 +675,6 @@ config DEV_PAGEMAP_OPS
>  config HMM_MIRROR
>  	bool
>  	depends on MMU
> -	depends on MMU_NOTIFIER
>  
>  config DEVICE_PRIVATE
>  	bool "Unaddressable device memory (GPU memory, ...)"
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 22ac3595771feb..75d15a820e182e 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -26,193 +26,6 @@
>  #include <linux/mmu_notifier.h>
>  #include <linux/memory_hotplug.h>
>  
> -static struct mmu_notifier *hmm_alloc_notifier(struct mm_struct *mm)
> -{
> -	struct hmm *hmm;
> -
> -	hmm = kzalloc(sizeof(*hmm), GFP_KERNEL);
> -	if (!hmm)
> -		return ERR_PTR(-ENOMEM);
> -
> -	init_waitqueue_head(&hmm->wq);
> -	INIT_LIST_HEAD(&hmm->mirrors);
> -	init_rwsem(&hmm->mirrors_sem);
> -	INIT_LIST_HEAD(&hmm->ranges);
> -	spin_lock_init(&hmm->ranges_lock);
> -	hmm->notifiers = 0;
> -	return &hmm->mmu_notifier;
> -}
> -
> -static void hmm_free_notifier(struct mmu_notifier *mn)
> -{
> -	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
> -
> -	WARN_ON(!list_empty(&hmm->ranges));
> -	WARN_ON(!list_empty(&hmm->mirrors));
> -	kfree(hmm);
> -}
> -
> -static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
> -{
> -	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
> -	struct hmm_mirror *mirror;
> -
> -	/*
> -	 * Since hmm_range_register() holds the mmget() lock hmm_release() is
> -	 * prevented as long as a range exists.
> -	 */
> -	WARN_ON(!list_empty_careful(&hmm->ranges));
> -
> -	down_read(&hmm->mirrors_sem);
> -	list_for_each_entry(mirror, &hmm->mirrors, list) {
> -		/*
> -		 * Note: The driver is not allowed to trigger
> -		 * hmm_mirror_unregister() from this thread.
> -		 */
> -		if (mirror->ops->release)
> -			mirror->ops->release(mirror);
> -	}
> -	up_read(&hmm->mirrors_sem);
> -}
> -
> -static void notifiers_decrement(struct hmm *hmm)
> -{
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&hmm->ranges_lock, flags);
> -	hmm->notifiers--;
> -	if (!hmm->notifiers) {
> -		struct hmm_range *range;
> -
> -		list_for_each_entry(range, &hmm->ranges, list) {
> -			if (range->valid)
> -				continue;
> -			range->valid = true;
> -		}
> -		wake_up_all(&hmm->wq);
> -	}
> -	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
> -}
> -
> -static int hmm_invalidate_range_start(struct mmu_notifier *mn,
> -			const struct mmu_notifier_range *nrange)
> -{
> -	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
> -	struct hmm_mirror *mirror;
> -	struct hmm_range *range;
> -	unsigned long flags;
> -	int ret = 0;
> -
> -	spin_lock_irqsave(&hmm->ranges_lock, flags);
> -	hmm->notifiers++;
> -	list_for_each_entry(range, &hmm->ranges, list) {
> -		if (nrange->end < range->start || nrange->start >= range->end)
> -			continue;
> -
> -		range->valid = false;
> -	}
> -	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
> -
> -	if (mmu_notifier_range_blockable(nrange))
> -		down_read(&hmm->mirrors_sem);
> -	else if (!down_read_trylock(&hmm->mirrors_sem)) {
> -		ret = -EAGAIN;
> -		goto out;
> -	}
> -
> -	list_for_each_entry(mirror, &hmm->mirrors, list) {
> -		int rc;
> -
> -		rc = mirror->ops->sync_cpu_device_pagetables(mirror, nrange);
> -		if (rc) {
> -			if (WARN_ON(mmu_notifier_range_blockable(nrange) ||
> -			    rc != -EAGAIN))
> -				continue;
> -			ret = -EAGAIN;
> -			break;
> -		}
> -	}
> -	up_read(&hmm->mirrors_sem);
> -
> -out:
> -	if (ret)
> -		notifiers_decrement(hmm);
> -	return ret;
> -}
> -
> -static void hmm_invalidate_range_end(struct mmu_notifier *mn,
> -			const struct mmu_notifier_range *nrange)
> -{
> -	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
> -
> -	notifiers_decrement(hmm);
> -}
> -
> -static const struct mmu_notifier_ops hmm_mmu_notifier_ops = {
> -	.release		= hmm_release,
> -	.invalidate_range_start	= hmm_invalidate_range_start,
> -	.invalidate_range_end	= hmm_invalidate_range_end,
> -	.alloc_notifier		= hmm_alloc_notifier,
> -	.free_notifier		= hmm_free_notifier,
> -};
> -
> -/*
> - * hmm_mirror_register() - register a mirror against an mm
> - *
> - * @mirror: new mirror struct to register
> - * @mm: mm to register against
> - * Return: 0 on success, -ENOMEM if no memory, -EINVAL if invalid arguments
> - *
> - * To start mirroring a process address space, the device driver must register
> - * an HMM mirror struct.
> - *
> - * The caller cannot unregister the hmm_mirror while any ranges are
> - * registered.
> - *
> - * Callers using this function must put a call to mmu_notifier_synchronize()
> - * in their module exit functions.
> - */
> -int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm)
> -{
> -	struct mmu_notifier *mn;
> -
> -	lockdep_assert_held_write(&mm->mmap_sem);
> -
> -	/* Sanity check */
> -	if (!mm || !mirror || !mirror->ops)
> -		return -EINVAL;
> -
> -	mn = mmu_notifier_get_locked(&hmm_mmu_notifier_ops, mm);
> -	if (IS_ERR(mn))
> -		return PTR_ERR(mn);
> -	mirror->hmm = container_of(mn, struct hmm, mmu_notifier);
> -
> -	down_write(&mirror->hmm->mirrors_sem);
> -	list_add(&mirror->list, &mirror->hmm->mirrors);
> -	up_write(&mirror->hmm->mirrors_sem);
> -
> -	return 0;
> -}
> -EXPORT_SYMBOL(hmm_mirror_register);
> -
> -/*
> - * hmm_mirror_unregister() - unregister a mirror
> - *
> - * @mirror: mirror struct to unregister
> - *
> - * Stop mirroring a process address space, and cleanup.
> - */
> -void hmm_mirror_unregister(struct hmm_mirror *mirror)
> -{
> -	struct hmm *hmm = mirror->hmm;
> -
> -	down_write(&hmm->mirrors_sem);
> -	list_del(&mirror->list);
> -	up_write(&hmm->mirrors_sem);
> -	mmu_notifier_put(&hmm->mmu_notifier);
> -}
> -EXPORT_SYMBOL(hmm_mirror_unregister);
> -
>  struct hmm_vma_walk {
>  	struct hmm_range	*range;
>  	struct dev_pagemap	*pgmap;
> @@ -779,87 +592,6 @@ static void hmm_pfns_clear(struct hmm_range *range,
>  		*pfns = range->values[HMM_PFN_NONE];
>  }
>  
> -/*
> - * hmm_range_register() - start tracking change to CPU page table over a range
> - * @range: range
> - * @mm: the mm struct for the range of virtual address
> - *
> - * Return: 0 on success, -EFAULT if the address space is no longer valid
> - *
> - * Track updates to the CPU page table see include/linux/hmm.h
> - */
> -int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror)
> -{
> -	struct hmm *hmm = mirror->hmm;
> -	unsigned long flags;
> -
> -	range->valid = false;
> -	range->hmm = NULL;
> -
> -	if ((range->start & (PAGE_SIZE - 1)) || (range->end & (PAGE_SIZE - 1)))
> -		return -EINVAL;
> -	if (range->start >= range->end)
> -		return -EINVAL;
> -
> -	/* Prevent hmm_release() from running while the range is valid */
> -	if (!mmget_not_zero(hmm->mmu_notifier.mm))
> -		return -EFAULT;
> -
> -	/* Initialize range to track CPU page table updates. */
> -	spin_lock_irqsave(&hmm->ranges_lock, flags);
> -
> -	range->hmm = hmm;
> -	list_add(&range->list, &hmm->ranges);
> -
> -	/*
> -	 * If there are any concurrent notifiers we have to wait for them for
> -	 * the range to be valid (see hmm_range_wait_until_valid()).
> -	 */
> -	if (!hmm->notifiers)
> -		range->valid = true;
> -	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
> -
> -	return 0;
> -}
> -EXPORT_SYMBOL(hmm_range_register);
> -
> -/*
> - * hmm_range_unregister() - stop tracking change to CPU page table over a range
> - * @range: range
> - *
> - * Range struct is used to track updates to the CPU page table after a call to
> - * hmm_range_register(). See include/linux/hmm.h for how to use it.
> - */
> -void hmm_range_unregister(struct hmm_range *range)
> -{
> -	struct hmm *hmm = range->hmm;
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&hmm->ranges_lock, flags);
> -	list_del_init(&range->list);
> -	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
> -
> -	/* Drop reference taken by hmm_range_register() */
> -	mmput(hmm->mmu_notifier.mm);
> -
> -	/*
> -	 * The range is now invalid and the ref on the hmm is dropped, so
> -	 * poison the pointer.  Leave other fields in place, for the caller's
> -	 * use.
> -	 */
> -	range->valid = false;
> -	memset(&range->hmm, POISON_INUSE, sizeof(range->hmm));
> -}
> -EXPORT_SYMBOL(hmm_range_unregister);
> -
> -static bool needs_retry(struct hmm_range *range)
> -{
> -	if (range->notifier)
> -		return mmu_range_check_retry(range->notifier,
> -					     range->notifier_seq);
> -	return !range->valid;
> -}
> -
>  static const struct mm_walk_ops hmm_walk_ops = {
>  	.pud_entry	= hmm_vma_walk_pud,
>  	.pmd_entry	= hmm_vma_walk_pmd,
> @@ -900,20 +632,15 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
>  	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
>  	unsigned long start = range->start, end;
>  	struct hmm_vma_walk hmm_vma_walk;
> -	struct mm_struct *mm;
> +	struct mm_struct *mm = range->notifier->mm;
>  	struct vm_area_struct *vma;
>  	int ret;
>  
> -	if (range->notifier)
> -		mm = range->notifier->mm;
> -	else
> -		mm = range->hmm->mmu_notifier.mm;
> -
>  	lockdep_assert_held(&mm->mmap_sem);
>  
>  	do {
>  		/* If range is no longer valid force retry. */
> -		if (needs_retry(range))
> +		if (mmu_range_check_retry(range->notifier, range->notifier_seq))
>  			return -EBUSY;
>  
>  		vma = find_vma(mm, start);
> @@ -946,7 +673,9 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
>  			start = hmm_vma_walk.last;
>  
>  			/* Keep trying while the range is valid. */
> -		} while (ret == -EBUSY && !needs_retry(range));
> +		} while (ret == -EBUSY &&
> +			 !mmu_range_check_retry(range->notifier,
> +						range->notifier_seq));
>  
>  		if (ret) {
>  			unsigned long i;
> @@ -1004,7 +733,8 @@ long hmm_range_dma_map(struct hmm_range *range, struct device *device,
>  			continue;
>  
>  		/* Check if range is being invalidated */
> -		if (needs_retry(range)) {
> +		if (mmu_range_check_retry(range->notifier,
> +					  range->notifier_seq)) {
>  			ret = -EBUSY;
>  			goto unmap;
>  		}
> -- 
> 2.23.0
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
                   ` (16 preceding siblings ...)
  2019-10-21 15:55 ` Dennis Dalessandro
@ 2019-10-21 18:40 ` Jerome Glisse
  2019-10-21 19:06   ` Jason Gunthorpe
  17 siblings, 1 reply; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 18:40 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Jason Gunthorpe

On Tue, Oct 15, 2019 at 03:12:27PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> they only use invalidate_range_start/end and immediately check the
> invalidating range against some driver data structure to tell if the
> driver is interested. Half of them use an interval_tree, the others are
> simple linear search lists.
> 
> Of the ones I checked they largely seem to have various kinds of races,
> bugs and poor implementation. This is a result of the complexity in how
> the notifier interacts with get_user_pages(). It is extremely difficult to
> use it correctly.
> 
> Consolidate all of this code together into the core mmu_notifier and
> provide a locking scheme similar to hmm_mirror that allows the user to
> safely use get_user_pages() and reliably know if the page list still
> matches the mm.
> 
> This new arrangement plays nicely with the !blockable mode for
> OOM. Scanning the interval tree is done such that the intersection test
> will always succeed, and since there is no invalidate_range_end exposed to
> drivers the scheme safely allows multiple drivers to be subscribed.
> 
> Four places are converted as an example of how the new API is used.
> Four are left for future patches:
>  - i915_gem has complex locking around destruction of a registration,
>    needs more study
>  - hfi1 (2nd user) needs access to the rbtree
>  - scif_dma has a complicated logic flow
>  - vhost's mmu notifiers are already being rewritten
> 
> This is still being tested, but I figured to send it to start getting help
> from the xen, amd and hfi drivers which I cannot test here.

It might be a good opportunity to also switch those users to
hmm_range_fault() instead of GUP, as GUP is pointless for those
users. In fact, GUP is an impediment to normal mm operations.

I will test on nouveau.

Cheers,
Jérôme



* Re: [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-21 18:30   ` Jerome Glisse
@ 2019-10-21 18:54     ` Jason Gunthorpe
  2019-10-21 19:11       ` Jerome Glisse
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 18:54 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Michal Hocko

On Mon, Oct 21, 2019 at 02:30:56PM -0400, Jerome Glisse wrote:

> > +/**
> > + * mmu_range_read_retry - End a read side critical section against a VA range
> > + * mrn: The range under lock
> > + * seq: The return of the paired mmu_range_read_begin()
> > + *
> > + * This MUST be called under a user provided lock that is also held
> > + * unconditionally by op->invalidate(). That lock provides the required SMP
> > + * barrier for handling invalidate_seq.
> > + *
> > + * Each call should be paired with a single mmu_range_read_begin() and
> > + * should be used to conclude the read side.
> > + *
> > + * Returns true if an invalidation collided with this critical section, and
> > + * the caller should retry.
> > + */
> > +static inline bool mmu_range_read_retry(struct mmu_range_notifier *mrn,
> > +					unsigned long seq)
> > +{
> > +	return READ_ONCE(mrn->invalidate_seq) != seq;
> > +}
> 
> What about calling this mmu_range_read_end() instead? To match
> with the mmu_range_read_begin().

_end makes some sense too, but I picked _retry for symmetry with the
seqcount_* family of functions, which use retry.

I think retry makes it clearer that it is expected to fail and retry
is required.

> > +	/*
> > +	 * The inv_end incorporates a deferred mechanism like rtnl. Adds and
> 
> The rtnl reference is lost on people unfamiliar with the networking
> code :) Maybe cite rtnl_lock()/rtnl_unlock() so people have a chance to
> grep the right function. Assuming I am myself getting the right
> reference :)

Yep, you got it, I will update

> > +	/*
> > +	 * mrn->invalidate_seq is always set to an odd value. This ensures
> > +	 * that if seq does wrap we will always clear the below sleep in some
> > +	 * reasonable time as mmn_mm->invalidate_seq is even in the idle
> > +	 * state.
> 
> I think this comment should be with the struct mmu_range_notifier
> definition and you should just point to it from here as the same
> comment would be useful down below.

I had it here because it is critical to understanding the wait_event
and why it doesn't just block indefinitely, but yes this property
comes up below too which refers back here.

Fundamentally this wait event is why this approach to keep an odd
value in the mrn is used.

> > -int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
> > +static int mn_itree_invalidate(struct mmu_notifier_mm *mmn_mm,
> > +				     const struct mmu_notifier_range *range)
> > +{
> > +	struct mmu_range_notifier *mrn;
> > +	unsigned long cur_seq;
> > +
> > +	for (mrn = mn_itree_inv_start_range(mmn_mm, range, &cur_seq); mrn;
> > +	     mrn = mn_itree_inv_next(mrn, range)) {
> > +		bool ret;
> > +
> > +		WRITE_ONCE(mrn->invalidate_seq, cur_seq);
> > +		ret = mrn->ops->invalidate(mrn, range);
> > +		if (!ret && !WARN_ON(mmu_notifier_range_blockable(range)))
> 
> Isn't the logic wrong here? We want to warn if the range
> was marked as blockable and invalidate returned false. Also
> we want to back off no matter what if the invalidate returns
> false, ie:

If invalidate returned false and the caller is blockable, then we do
not want to return; we must continue processing other ranges to try
to cope with the defective driver.

Callers in blocking mode ignore the return value and go ahead to
invalidate.

Would it be clearer as 

if (!ret) {
   if (WARN_ON(mmu_notifier_range_blockable(range)))
       continue;
   goto out_would_block;
}

?

> > @@ -284,21 +589,22 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
> >  		 * the write side of the mmap_sem.
> >  		 */
> >  		mmu_notifier_mm =
> > -			kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
> > +			kzalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
> >  		if (!mmu_notifier_mm)
> >  			return -ENOMEM;
> >  
> >  		INIT_HLIST_HEAD(&mmu_notifier_mm->list);
> >  		spin_lock_init(&mmu_notifier_mm->lock);
> > +		mmu_notifier_mm->invalidate_seq = 2;
> 
> Why starting at 2 ?

Good question. If everything is coded properly the starting value
doesn't matter.

I left it like this because it makes debugging a tiny bit simpler, ie
if you print the seq number then the first mmu_range_notifiers will
get 1 as their initial seq (see __mmu_range_notifier_insert) instead of
ULONG_MAX.

> > +		mmu_notifier_mm->itree = RB_ROOT_CACHED;
> > +		init_waitqueue_head(&mmu_notifier_mm->wq);
> > +		INIT_HLIST_HEAD(&mmu_notifier_mm->deferred_list);
> >  	}
> >  
> >  	ret = mm_take_all_locks(mm);
> >  	if (unlikely(ret))
> >  		goto out_clean;
> >  
> > -	/* Pairs with the mmdrop in mmu_notifier_unregister_* */
> > -	mmgrab(mm);
> > -
> >  	/*
> >  	 * Serialize the update against mmu_notifier_unregister. A
> >  	 * side note: mmu_notifier_release can't run concurrently with
> > @@ -306,13 +612,26 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
> >  	 * current->mm or explicitly with get_task_mm() or similar).
> >  	 * We can't race against any other mmu notifier method either
> >  	 * thanks to mm_take_all_locks().
> > +	 *
> > +	 * release semantics are provided for users not inside a lock covered
> > +	 * by mm_take_all_locks(). acquire can only be used while holding the
> > +	 * mmgrab or mmget, and is safe because once created the
> > +	 * mmu_notififer_mm is not freed until the mm is destroyed.
> >  	 */
> >  	if (mmu_notifier_mm)
> > -		mm->mmu_notifier_mm = mmu_notifier_mm;
> > +		smp_store_release(&mm->mmu_notifier_mm, mmu_notifier_mm);
> 
> I do not understand why you need the release semantics here. We
> are under the mmap_sem in write mode; when we release it, the lock
> barrier will make sure anyone else sees the new mmu_notifier_mm.

It pairs with the smp_load_acquire() in mmu_range_notifier_insert(),
which is not called with the mmap_sem held.

Since that reader is not locked, we need release semantics here to
ensure the unlocked reader sees a fully initialized mmu_notifier_mm
structure when it observes the pointer.

> > +/**
> > + * mmu_range_notifier_insert - Insert a range notifier
> > + * @mrn: Range notifier to register
> > + * @start: Starting virtual address to monitor
> > + * @length: Length of the range to monitor
> > + * @mm : mm_struct to attach to
> > + *
> > + * This function subscribes the range notifier for notifications from the mm.
> > + * Upon return the ops related to mmu_range_notifier will be called whenever
> > + * an event that intersects with the given range occurs.
> > + *
> > + * Upon return the range_notifier may not be present in the interval tree yet.
> > + * The caller must use the normal range notifier locking flow via
> > + * mmu_range_read_begin() to establish SPTEs for this range.
> > + */
> > +int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
> > +			      unsigned long start, unsigned long length,
> > +			      struct mm_struct *mm)
> > +{
> > +	struct mmu_notifier_mm *mmn_mm;
> > +	int ret;
> > +
> > +	might_lock(&mm->mmap_sem);
> > +
> > +	mmn_mm = smp_load_acquire(&mm->mmu_notifier_mm);

Right here we don't have the mmap_sem so this load is unlocked.

If we observe a non-NULL mmn_mm we must also observe all stores done to
set it up. Ie we have to observe the spin_lock_init, RB_ROOT_CACHED/etc

> > +	if (!mmn_mm || !mmn_mm->has_interval) {
> > +		ret = mmu_notifier_register(NULL, mm);
> > +		if (ret)
> > +			return ret;
> > +		mmn_mm = mm->mmu_notifier_mm;
> > +	}
> > +	return __mmu_range_notifier_insert(mrn, start, length, mmn_mm, mm);
> > +}
> > +EXPORT_SYMBOL_GPL(mmu_range_notifier_insert);
> > +
> > +int mmu_range_notifier_insert_locked(struct mmu_range_notifier *mrn,
> > +				     unsigned long start, unsigned long length,
> > +				     struct mm_struct *mm)
> > +{
> > +	struct mmu_notifier_mm *mmn_mm;
> > +	int ret;
> > +
> > +	lockdep_assert_held_write(&mm->mmap_sem);
> > +
> > +	mmn_mm = mm->mmu_notifier_mm;
> 
> Shouldn't you be using smp_load_acquire() ?

This function is called while holding the mmap_sem. As you noted above
all writers to mm->mmu_notifier_mm hold the write side of mmap_sem,
thus here the read side is fully locked and doesn't need the acquire.

Note the lockdep annotations marking the expected locking environment
for the two functions.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related
  2019-10-21 18:38   ` Jerome Glisse
@ 2019-10-21 18:57     ` Jason Gunthorpe
  2019-10-21 19:19       ` Jerome Glisse
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 18:57 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs

On Mon, Oct 21, 2019 at 02:38:24PM -0400, Jerome Glisse wrote:
> On Tue, Oct 15, 2019 at 03:12:42PM -0300, Jason Gunthorpe wrote:
> > From: Jason Gunthorpe <jgg@mellanox.com>
> > 
> > The only two users of this are now converted to use mmu_range_notifier,
> > delete all the code and update hmm.rst.
> 
> I guess i should point out that the reasons for hmm_mirror and hmm
> was for:
>     1) Maybe define a common API for userspace to provide memory
>        placement hints (NUMA for GPU)

Do you think this needs special code in the notifiers?

>     2) multi-devices sharing same mirror page table

Oh neat, but I think this just means the GPU driver has to register a
single notifier for multiple GPUs??

> But support for multi-GPU in nouveau is way behind and i guess such
> optimization will have to re-materialize what is necessary once that
> happens.

Sure, it will be easier to understand what is needed with a bit of
code!

> Note this patch should also update kernel/fork.c and the mm_struct
> definition AFAICT. With those changes you can add my:

Can you please elaborate what updates you mean? I'm not sure. 

Maybe I already got the things you are thinking of with the get/put
changes?

Thanks,
Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 18:40 ` Jerome Glisse
@ 2019-10-21 19:06   ` Jason Gunthorpe
  2019-10-23 20:26     ` Jerome Glisse
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 19:06 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs

On Mon, Oct 21, 2019 at 02:40:41PM -0400, Jerome Glisse wrote:
> On Tue, Oct 15, 2019 at 03:12:27PM -0300, Jason Gunthorpe wrote:
> > From: Jason Gunthorpe <jgg@mellanox.com>
> > 
> > 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> > scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> > they only use invalidate_range_start/end and immediately check the
> > invalidating range against some driver data structure to tell if the
> > driver is interested. Half of them use an interval_tree, the others are
> > simple linear search lists.
> > 
> > Of the ones I checked they largely seem to have various kinds of races,
> > bugs and poor implementation. This is a result of the complexity in how
> > the notifier interacts with get_user_pages(). It is extremely difficult to
> > use it correctly.
> > 
> > Consolidate all of this code together into the core mmu_notifier and
> > provide a locking scheme similar to hmm_mirror that allows the user to
> > safely use get_user_pages() and reliably know if the page list still
> > matches the mm.
> > 
> > This new arrangement plays nicely with the !blockable mode for
> > OOM. Scanning the interval tree is done such that the intersection test
> > will always succeed, and since there is no invalidate_range_end exposed to
> > drivers the scheme safely allows multiple drivers to be subscribed.
> > 
> > Four places are converted as an example of how the new API is used.
> > Four are left for future patches:
> >  - i915_gem has complex locking around destruction of a registration,
> >    needs more study
> >  - hfi1 (2nd user) needs access to the rbtree
> >  - scif_dma has a complicated logic flow
> >  - vhost's mmu notifiers are already being rewritten
> > 
> > This is still being tested, but I figured to send it to start getting help
> > from the xen, amd and hfi drivers which I cannot test here.
> 
> It might be a good opportunity to also switch those users to
> hmm_range_fault() instead of GUP as GUP is pointless for those
> users. In fact the GUP is an impediment to normal mm operations.

I think vhost can use hmm_range_fault

hfi1 does actually need to have the page pin, it doesn't fence DMA
during invalidate.

i915_gem feels a lot like amdgpu, so it would probably benefit

No idea about scif_dma

> I will test on nouveau.

Thanks, hopefully it still works, I think Ralph was able to do some
basic checks. But it is a pretty complicated series, I probably made
some mistakes.

FWIW, I know that nouveau gets a lockdep splat now from Daniel
Vetter's recent changes; it tries to do GFP_KERNEL allocations under
a lock also held by the invalidate_range_start path.

Thanks for looking at it!

Regards,
Jason 


* Re: [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-21 18:54     ` Jason Gunthorpe
@ 2019-10-21 19:11       ` Jerome Glisse
  2019-10-21 19:24         ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 19:11 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Michal Hocko

On Mon, Oct 21, 2019 at 06:54:25PM +0000, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:30:56PM -0400, Jerome Glisse wrote:
> 
> > > +/**
> > > + * mmu_range_read_retry - End a read side critical section against a VA range
> > > + * @mrn: The range under lock
> > > + * @seq: The return of the paired mmu_range_read_begin()
> > > + *
> > > + * This MUST be called under a user provided lock that is also held
> > > + * unconditionally by op->invalidate(). That lock provides the required SMP
> > > + * barrier for handling invalidate_seq.
> > > + *
> > > + * Each call should be paired with a single mmu_range_read_begin() and
> > > + * should be used to conclude the read side.
> > > + *
> > > + * Returns true if an invalidation collided with this critical section, and
> > > + * the caller should retry.
> > > + */
> > > +static inline bool mmu_range_read_retry(struct mmu_range_notifier *mrn,
> > > +					unsigned long seq)
> > > +{
> > > +	return READ_ONCE(mrn->invalidate_seq) != seq;
> > > +}
> > 
> > What about calling this mmu_range_read_end() instead ? To match
> > with the mmu_range_read_begin().
> 
> _end make some sense too, but I picked _retry for symmetry with the
> seqcount_* family of functions which used retry.
> 
> I think retry makes it clearer that it is expected to fail and retry
> is required.

Fair enough.
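The retry semantics being discussed can be shown as a single-threaded model of the read-side collision-retry loop. The two API names follow the patch, but the bodies here are mocks (the real ones use READ_ONCE() and require the driver lock to be held across the retry check):

```c
/* Mock of the range notifier; only the sequence field matters here. */
struct mmu_range_notifier {
	unsigned long invalidate_seq;
};

static unsigned long mmu_range_read_begin(struct mmu_range_notifier *mrn)
{
	return mrn->invalidate_seq;
}

/* Returns true if an invalidation collided and the caller must retry. */
static int mmu_range_read_retry(struct mmu_range_notifier *mrn,
				unsigned long seq)
{
	return mrn->invalidate_seq != seq;
}

/* Driver read side: repeat page-table setup until no invalidation
 * collided with the critical section. */
static int establish_sptes(struct mmu_range_notifier *mrn, int *attempts)
{
	unsigned long seq;

	do {
		seq = mmu_range_read_begin(mrn);
		/* get_user_pages()/hmm_range_fault() would run here */
		(*attempts)++;
		/* driver_lock() would be taken before the check */
	} while (mmu_range_read_retry(mrn, seq));
	/* SPTEs now known to match the mm; driver_unlock() follows */
	return 0;
}
```

With no concurrent invalidation the loop makes a single pass, mirroring the seqcount_* retry idiom the name is borrowed from.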

> 
> > > +	/*
> > > +	 * The inv_end incorporates a deferred mechanism like rtnl. Adds and
> > 
> > The rtnl reference is lost on people unfamiliar with the networking
> > code :) Maybe cite rtnl_lock()/rtnl_unlock() so people have a chance to
> > grep the right function. Assuming I am myself getting the right
> > reference :)
> 
> Yep, you got it, I will update
> 
> > > +	/*
> > > +	 * mrn->invalidate_seq is always set to an odd value. This ensures
> > > +	 * that if seq does wrap we will always clear the below sleep in some
> > > +	 * reasonable time as mmn_mm->invalidate_seq is even in the idle
> > > +	 * state.
> > 
> > I think this comment should be with the struct mmu_range_notifier
> > definition and you should just point to it from here as the same
> > comment would be useful down below.
> 
> I had it here because it is critical to understanding the wait_event
> and why it doesn't just block indefinitely, but yes this property
> comes up below too which refers back here.
> 
> Fundamentally this wait event is why this approach to keep an odd
> value in the mrn is used.

The comment is fine, it is just that I read the patch out of order, and
in the insert function I was pondering why the value must be odd while
the explanation was here. It is more a matter of taste, but I prefer
comments like this to be part of the struct definition comments so that
multiple places can refer to the same struct definition. That is more
resilient to code change, since the struct definition is always easy to
find and references to it can be sprinkled wherever necessary.


> 
> > > -int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
> > > +static int mn_itree_invalidate(struct mmu_notifier_mm *mmn_mm,
> > > +				     const struct mmu_notifier_range *range)
> > > +{
> > > +	struct mmu_range_notifier *mrn;
> > > +	unsigned long cur_seq;
> > > +
> > > +	for (mrn = mn_itree_inv_start_range(mmn_mm, range, &cur_seq); mrn;
> > > +	     mrn = mn_itree_inv_next(mrn, range)) {
> > > +		bool ret;
> > > +
> > > +		WRITE_ONCE(mrn->invalidate_seq, cur_seq);
> > > +		ret = mrn->ops->invalidate(mrn, range);
> > > +		if (!ret && !WARN_ON(mmu_notifier_range_blockable(range)))
> > 
> > Isn't the logic wrong here? We want to warn if the range
> > was marked as blockable and invalidate returned false. Also
> > we want to back off no matter what if the invalidate returns
> > false, ie:
> 
> If invalidate returned false and the caller is blockable then we do
> not want to return, we must continue processing other ranges - to try
> to cope with the defective driver.
> 
> Callers in blocking mode ignore the return value and go ahead to
> invalidate..
> 
> Would it be clearer as 
> 
> if (!ret) {
>    if (WARN_ON(mmu_notifier_range_blockable(range)))
>        continue;
>    goto out_would_block;
> }
> 
> ?

Yes, looks clearer to me at least.

> 
> > > @@ -284,21 +589,22 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
> > >  		 * the write side of the mmap_sem.
> > >  		 */
> > >  		mmu_notifier_mm =
> > > -			kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
> > > +			kzalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
> > >  		if (!mmu_notifier_mm)
> > >  			return -ENOMEM;
> > >  
> > >  		INIT_HLIST_HEAD(&mmu_notifier_mm->list);
> > >  		spin_lock_init(&mmu_notifier_mm->lock);
> > > +		mmu_notifier_mm->invalidate_seq = 2;
> > 
> > Why starting at 2 ?
> 
> Good question. If everything is coded properly the starting value
> doesn't matter
> 
> I left it like this because it makes debugging a tiny bit simpler, ie
> if you print the seq number then the first mmu_range_notififers will
> get 1 as their initial seq (see __mmu_range_notifier_insert) instead of
> ULONG_MAX

Yeah, makes sense.
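The odd/even convention being relied on here can be shown in a few illustrative lines (a userspace model of just the encoding, not the kernel code):

```c
/* Collection-wide sequence: starts at 2 (even means idle) and is made
 * odd for the duration of an invalidation. */
static unsigned long collection_seq = 2;

static int seq_is_invalidating(unsigned long seq)
{
	return seq & 1;
}

static unsigned long model_inv_start(void)
{
	collection_seq |= 1;	/* odd: invalidation in progress */
	return collection_seq;
}

static void model_inv_end(void)
{
	collection_seq++;	/* next even value: idle again */
}
```

Because the idle value is always even and a range's stored seq is always odd, a sleeper comparing against the collection seq is guaranteed to wake in reasonable time even if the counter wraps.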

> 
> > > +		mmu_notifier_mm->itree = RB_ROOT_CACHED;
> > > +		init_waitqueue_head(&mmu_notifier_mm->wq);
> > > +		INIT_HLIST_HEAD(&mmu_notifier_mm->deferred_list);
> > >  	}
> > >  
> > >  	ret = mm_take_all_locks(mm);
> > >  	if (unlikely(ret))
> > >  		goto out_clean;
> > >  
> > > -	/* Pairs with the mmdrop in mmu_notifier_unregister_* */
> > > -	mmgrab(mm);
> > > -
> > >  	/*
> > >  	 * Serialize the update against mmu_notifier_unregister. A
> > >  	 * side note: mmu_notifier_release can't run concurrently with
> > > @@ -306,13 +612,26 @@ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
> > >  	 * current->mm or explicitly with get_task_mm() or similar).
> > >  	 * We can't race against any other mmu notifier method either
> > >  	 * thanks to mm_take_all_locks().
> > > +	 *
> > > +	 * release semantics are provided for users not inside a lock covered
> > > +	 * by mm_take_all_locks(). acquire can only be used while holding the
> > > +	 * mmgrab or mmget, and is safe because once created the
> > >  	 * mmu_notifier_mm is not freed until the mm is destroyed.
> > >  	 */
> > >  	if (mmu_notifier_mm)
> > > -		mm->mmu_notifier_mm = mmu_notifier_mm;
> > > +		smp_store_release(&mm->mmu_notifier_mm, mmu_notifier_mm);
> > 
> > I do not understand why you need the release semantics here, we
> > are under the mmap_sem in write mode when we release it the lock
> > barrier will make sure anyone else sees the new mmu_notifier_mm
> 
> It pairs with the smp_load_acquire() in mmu_range_notifier_insert()
> which is not called with the mmap_sem held. 
> 
> Since that reader is not locked we need release semantics here to
> ensure the unlocked reader sees a fully initialized mmu_notifier_mm
> structure when it observes the pointer.

I thought the mm_take_all_locks() would have had a barrier and thus
that you could not see the mmu_notifier struct partially initialized.
But having the acquire/release as a safety net does not hurt. Maybe
add a comment about the struct initialization needing to be visible
before the pointer is set.

> 
> > > +/**
> > > + * mmu_range_notifier_insert - Insert a range notifier
> > > + * @mrn: Range notifier to register
> > > + * @start: Starting virtual address to monitor
> > > + * @length: Length of the range to monitor
> > > + * @mm : mm_struct to attach to
> > > + *
> > > + * This function subscribes the range notifier for notifications from the mm.
> > > + * Upon return the ops related to mmu_range_notifier will be called whenever
> > > + * an event that intersects with the given range occurs.
> > > + *
> > > + * Upon return the range_notifier may not be present in the interval tree yet.
> > > + * The caller must use the normal range notifier locking flow via
> > > + * mmu_range_read_begin() to establish SPTEs for this range.
> > > + */
> > > +int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
> > > +			      unsigned long start, unsigned long length,
> > > +			      struct mm_struct *mm)
> > > +{
> > > +	struct mmu_notifier_mm *mmn_mm;
> > > +	int ret;
> > > +
> > > +	might_lock(&mm->mmap_sem);
> > > +
> > > +	mmn_mm = smp_load_acquire(&mm->mmu_notifier_mm);
> 
> Right here we don't have the mmap_sem so this load is unlocked.
> 
> If we observe a non-NULL mmn_mm we must also observe all stores done to
> set it up, i.e. we have to observe the spin_lock_init, RB_ROOT_CACHED, etc.
> 
> > > +	if (!mmn_mm || !mmn_mm->has_interval) {
> > > +		ret = mmu_notifier_register(NULL, mm);
> > > +		if (ret)
> > > +			return ret;
> > > +		mmn_mm = mm->mmu_notifier_mm;
> > > +	}
> > > +	return __mmu_range_notifier_insert(mrn, start, length, mmn_mm, mm);
> > > +}
> > > +EXPORT_SYMBOL_GPL(mmu_range_notifier_insert);
> > > +
> > > +int mmu_range_notifier_insert_locked(struct mmu_range_notifier *mrn,
> > > +				     unsigned long start, unsigned long length,
> > > +				     struct mm_struct *mm)
> > > +{
> > > +	struct mmu_notifier_mm *mmn_mm;
> > > +	int ret;
> > > +
> > > +	lockdep_assert_held_write(&mm->mmap_sem);
> > > +
> > > +	mmn_mm = mm->mmu_notifier_mm;
> > 
> > Shouldn't you be using smp_load_acquire() ?
> 
> This function is called while holding the mmap_sem. As you noted above
> all writers to mm->mmu_notifier_mm hold the write side of mmap_sem,
> thus here the read side is fully locked and doesn't need the acquire.
> 
> Note the lockdep annotations marking the expected locking enviroment
> for the two functions.

Yes, I thought you had the acquire/release for some other reason
than struct init. It is fine here not to use the load_acquire.

Cheers,
Jérôme



* Re: [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER
  2019-10-16  6:35     ` Oleksandr Andrushchenko
@ 2019-10-21 19:12       ` Jason Gunthorpe
  2019-10-28  6:25         ` [Xen-devel] " Oleksandr Andrushchenko
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 19:12 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Jürgen Groß,
	Felix.Kuehling, John Hubbard, Ralph Campbell, Jerome Glisse,
	Stefano Stabellini, linux-mm, amd-gfx, dri-devel, xen-devel,
	Boris Ostrovsky, Andrea Arcangeli, Ben Skeggs, linux-rdma

On Wed, Oct 16, 2019 at 06:35:15AM +0000, Oleksandr Andrushchenko wrote:
> On 10/16/19 8:11 AM, Jürgen Groß wrote:
> > On 15.10.19 20:12, Jason Gunthorpe wrote:
> >> From: Jason Gunthorpe <jgg@mellanox.com>
> >>
> >> DMA_SHARED_BUFFER can not be enabled by the user (it represents a 
> >> library
> >> set in the kernel). The kconfig convention is to use select for such
> >> symbols so they are turned on implicitly when the user enables a kconfig
> >> that needs them.
> >>
> >> Otherwise the XEN_GNTDEV_DMABUF kconfig is overly difficult to enable.
> >>
> >> Fixes: 932d6562179e ("xen/gntdev: Add initial support for dma-buf UAPI")
> >> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >> Cc: xen-devel@lists.xenproject.org
> >> Cc: Juergen Gross <jgross@suse.com>
> >> Cc: Stefano Stabellini <sstabellini@kernel.org>
> >> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
> >
> > Reviewed-by: Juergen Gross <jgross@suse.com>
> >
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thanks Oleksandr and Juergen, can you also give me some advice on how
to progress the more complex patch:

https://patchwork.kernel.org/patch/11191369/

Is this gntdev stuff still in-use? I struggled a bit to understand
what it is doing, but I think I made a reasonable guess?

Jason


* Re: [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related
  2019-10-21 18:57     ` Jason Gunthorpe
@ 2019-10-21 19:19       ` Jerome Glisse
  0 siblings, 0 replies; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 19:19 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs

On Mon, Oct 21, 2019 at 06:57:42PM +0000, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:38:24PM -0400, Jerome Glisse wrote:
> > On Tue, Oct 15, 2019 at 03:12:42PM -0300, Jason Gunthorpe wrote:
> > > From: Jason Gunthorpe <jgg@mellanox.com>
> > > 
> > > The only two users of this are now converted to use mmu_range_notifier,
> > > delete all the code and update hmm.rst.
> > 
> > I guess i should point out that the reasons for hmm_mirror and hmm
> > was for:
> >     1) Maybe define a common API for userspace to provide memory
> >        placement hints (NUMA for GPU)
> 
> Do you think this needs special code in the notifiers?

We just need a place to hang userspace policy hints; the hmm_range
was the prime suspect. I need to revisit this once the nouveau
userspace is in better shape.

> 
> >     2) multi-devices sharing same mirror page table
> 
> Oh neat, but I think this just means the GPU driver has to register a
> single notifier for multiple GPUs??

Yes, that was the idea: a single notifier with a shared page table.
But at this time this is non-existent code, so no need to hinder
change just for the sake of it.

> 
> > But support for multi-GPU in nouveau is way behind and i guess such
> > optimization will have to re-materialize what is necessary once that
> > happens.
> 
> Sure, it will be easier to understand what is needed with a bit of
> code!
> 
> > Note this patch should also update kernel/fork.c and the mm_struct
> > definition AFAICT. With those changes you can add my:
> 
> Can you please elaborate what updates you mean? I'm not sure. 
> 
> Maybe I already got the things you are thinking of with the get/put
> changes?

Oh i forgot this was already taken care of by this. So yes all is
fine:

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>



* Re: [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-21 19:11       ` Jerome Glisse
@ 2019-10-21 19:24         ` Jason Gunthorpe
  2019-10-21 19:47           ` Jerome Glisse
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-21 19:24 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Michal Hocko

On Mon, Oct 21, 2019 at 03:11:57PM -0400, Jerome Glisse wrote:
> > Since that reader is not locked we need release semantics here to
> > ensure the unlocked reader sees a fully initialized mmu_notifier_mm
> > structure when it observes the pointer.
> 
> I thought the mm_take_all_locks() would have had a barrier and thus
> that you could not see the mmu_notifier struct partially initialized.

Not sure, usually a lock acquire doesn't have a store barrier?

Even if it did, we would still need some pairing read barrier..

> having the acquire/release as safety net does not hurt. Maybe add a
> comment about the struct initialization needing to be visible before
> pointer is set.

Is this clear?

         * release semantics on the initialization of the
         * mmu_notifier_mm's contents are provided for unlocked readers.
	 * acquire can only be used while holding the
         * mmgrab or mmget, and is safe because once created the
         * mmu_notifier_mm is not freed until the mm is destroyed.
         * Users holding the mmap_sem or one of the
	 * mm_take_all_locks() do not need to use acquire semantics.

It also helps explain why there is no locking around the other
readers, which has puzzled me in the past at least.

Jason


* Re: [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-21 19:24         ` Jason Gunthorpe
@ 2019-10-21 19:47           ` Jerome Glisse
  0 siblings, 0 replies; 58+ messages in thread
From: Jerome Glisse @ 2019-10-21 19:47 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs,
	Michal Hocko

On Mon, Oct 21, 2019 at 07:24:53PM +0000, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 03:11:57PM -0400, Jerome Glisse wrote:
> > > Since that reader is not locked we need release semantics here to
> > > ensure the unlocked reader sees a fully initialized mmu_notifier_mm
> > > structure when it observes the pointer.
> > 
> > I thought the mm_take_all_locks() would have had a barrier and thus
> > that you could not see the mmu_notifier struct partially initialized.
> 
> Not sure, usually a lock acquire doesn't have a store barrier?

Yeah likely.

> 
> Even if it did, we would still need some pairing read barrier..
> 
> > having the acquire/release as safety net does not hurt. Maybe add a
> > comment about the struct initialization needing to be visible before
> > pointer is set.
> 
> Is this clear?
> 
>          * release semantics on the initialization of the
>          * mmu_notifier_mm's contents are provided for unlocked readers.
> 	 * acquire can only be used while holding the
>          * mmgrab or mmget, and is safe because once created the
>          * mmu_notifier_mm is not freed until the mm is destroyed.
>          * Users holding the mmap_sem or one of the
> 	 * mm_take_all_locks() do not need to use acquire semantics.
> 
> It also helps explain why there is no locking around the other
> readers, which has puzzled me in the past at least.

Perfect.

Jérôme



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 15:12                   ` Jason Gunthorpe
@ 2019-10-22  7:57                     ` Daniel Vetter
  2019-10-22 15:01                       ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Daniel Vetter @ 2019-10-22  7:57 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Koenig, Christian, Andrea Arcangeli, Yang, Philip,
	Ralph Campbell, linux-rdma, John Hubbard, Kuehling, Felix,
	amd-gfx, linux-mm, Jerome Glisse, dri-devel, Ben Skeggs

On Mon, Oct 21, 2019 at 03:12:26PM +0000, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:28:46PM +0000, Koenig, Christian wrote:
> > Am 21.10.19 um 15:57 schrieb Jason Gunthorpe:
> > > On Sun, Oct 20, 2019 at 02:21:42PM +0000, Koenig, Christian wrote:
> > >> Am 18.10.19 um 22:36 schrieb Jason Gunthorpe:
> > >>> On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:
> > >>> [SNIP]
> > >>>    
> > >>>> So again how are they serialized?
> > >>> The 'driver lock' thing does it, read the hmm documentation, the hmm
> > >>> approach is basically the only approach that was correct of all the
> > >>> drivers..
> > >> Well that's what I did, but what HMM does still doesn't look correct
> > >> to me.
> > > It has a bug, but the basic flow seems to work.
> > >
> > > https://patchwork.kernel.org/patch/11191
> > 
> > Maybe wrong link? That link looks like an unrelated discussion on kernel 
> > image relocation.
> 
> Sorry, it got corrupted:
> 
> https://patchwork.kernel.org/patch/11191397/
> 
> > >>> So long as the 'driver lock' is held the range cannot become
> > >>> invalidated as the 'driver lock' prevents progress of invalidation.
> > >> Correct, but the problem is it doesn't wait for ongoing operations to
> > >> complete.
> > >>
> > >> See I'm talking about the following case:
> > >>
> > >> Thread A    Thread B
> > >> invalidate_range_start()
> > >>                       mmu_range_read_begin()
> > >>                       get_user_pages()/hmm_range_fault()
> > >>                       grab_driver_lock()
> > >> Updating the ptes
> > >> invalidate_range_end()
> > >>
> > >> As far as I can see in invalidate_range_start() the driver lock is taken
> > >> to make sure that we can't start any invalidation while the driver is
> > >> using the pages for a command submission.
> > > Again, this uses the seqlock like scheme *and* the driver lock.
> > >
> > > In this case after grab_driver_lock() mmu_range_read_retry() will
> > > return false if Thread A has progressed to 'updating the ptes'.
> > >
> > > For instance here is how the concurrency resolves for retry:
> > >
> > >         CPU1                                CPU2
> > >                                    seq = mmu_range_read_begin()
> > > invalidate_range_start()
> > >    invalidate_seq++
> > 
> > How that was ordered was what was confusing me. But I've read up on the code
> > in mmu_range_read_begin() and found the lines I was looking for:
> > 
> > +    if (is_invalidating)
> > +        wait_event(mmn_mm->wq,
> > +               READ_ONCE(mmn_mm->invalidate_seq) != seq);
> > 
> > [SNIP]
> 
> Right, the basic design is that the 'seq' returned by
> mmu_range_read_begin() is never currently under invalidation.
> 
> Thus if the starting seq is not invalidating, then if it doesn't
> change we are guaranteed the ptes haven't changed either.
> 
> > > For the above I've simplified the mechanics of the invalidate_seq, you
> > > need to look through the patch to see how it actually works.
> > 
> > Yea, that you also allow multiple write sides is pretty neat.
> 
> Complicated, but necessary to make the non-blocking OOM stuff able to
> read the interval tree under all conditions :\
> 
> > > One of the motivations for this work is to squash that bug by adding a
> > > seqlock like pattern. But the basic hmm flow and collision-retry
> > > approach seems sound.
> > >
> > > Do you see a problem with this patch?
> > 
> > No, not any more.
> 
> Okay, great, thanks
>  
> > Essentially you are doing the same thing I've tried to do with the 
> > original amdgpu implementation. The difference is that you don't try to 
> > use a per range sequence (which is a good idea, we never got that fully 
> > working) and you allow multiple writers at the same time.
> 
> Yes, ODP had the per-range sequence and it never worked right
> either. Keeping track of things during the invalidate_end was too complex
>  
> > Feel free to stitch an Acked-by: Christian König 
> > <christian.koenig@amd.com> on patch #2,
> 
> Thanks! Can you also take some review and test for the AMD related
> patches? These were quite hard to make, it is very likely I've made an
> error..
> 
> > but you still doing a bunch of things in there which are way beyond
> > my understanding (e.g. where are all the SMP barriers?).
> 
> The only non-locked data is 'struct mmu_range_notifier->invalidate_seq'
> 
> On the write side it uses a WRITE_ONCE. The 'seq' it writes is
> generated under the mmn_mm->lock spinlock (held before and after the
> WRITE_ONCE) such that all concurrent WRITE_ONCE's are storing the same
> value. 
> 
> Essentially the spinlock is providing the barrier to order write:
> 
> invalidate_range_start():
>  spin_lock(&mmn_mm->lock);
>  mmn_mm->active_invalidate_ranges++;
>  mmn_mm->invalidate_seq |= 1;
>  *seq = mmn_mm->invalidate_seq;
>  spin_unlock(&mmn_mm->lock);
> 
>  WRITE_ONCE(mrn->invalidate_seq, cur_seq);
> 
> invalidate_range_end()
>  spin_lock(&mmn_mm->lock);
>  if (--mmn_mm->active_invalidate_ranges == 0)
>     mmn_mm->invalidate_seq++
>  spin_unlock(&mmn_mm->lock);
> 
> ie when we do invalidate_seq++, due to the active_invalidate_ranges
> counter and the spinlock, we know all other WRITE_ONCE's have passed
> their spin_unlock and no concurrent ones are starting. The spinlock is
> providing the barrier here.
> 
> On the read side.. It is a similar argument, except with the
> driver_lock:
> 
> mmu_range_read_begin()
>   seq = READ_ONCE(mrn->invalidate_seq);
> 
> Here 'seq' may be the current value, or it may be an older
> value. Ordering is eventually provided by the driver_lock:
> 
> mn_tree_invalidate_start()
>  [..]
>  WRITE_ONCE(mrn->invalidate_seq, cur_seq);
>  driver_lock()
>  driver_unlock()
> 
> vs
>  driver_lock()
>    mmu_range_read_begin()
>      return seq != READ_ONCE(mrn->invalidate_seq);
>  driver_unlock()
> 
> Here if mn_tree_invalidate_start() has passed driver_unlock() then
> because we do driver_lock() before mmu_range_read_begin() we must
> observe the WRITE_ONCE. ie the driver_unlock()/driver_lock() provide
> the pair'd barrier.
> 
> If mn_tree_invalidate_start() has not passed driver_lock() then the
> mmu_range_read_begin() side prevents it from passing driver_lock()
> while it holds the lock. In this case it is OK if we don't observe the
> WRITE_ONCE() that was done just before as the invalidate_start()
> thread can't proceed to invalidation.
> 
> It is very unusual locking, it would be great if others could help
> look at it!
> 
> The unusual bit in all of this is using a lock's critical region to
> 'protect' data for read, but updating that same data before the lock's
> critical section, ie relying on the unlock barrier to 'release' program
> ordered stores done before the lock's own critical region, and the
> lock side barrier to 'acquire' those stores.

I think this unusual use of locks as barriers for other unlocked accesses
deserves comments even more than just normal barriers. Can you pls add
them? I think the design seeems sound ...

Also, the comment on the driver's lock hopefully prevents driver
maintainers from moving the driver_lock around in a way that would very
subtly break the scheme, so I think having the acquire barrier commented
in each place would be really good.
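The write-side pseudocode quoted above can also be turned into a small compilable model (userspace; a toy spinlock stands in for mmn_mm->lock and all names are illustrative). Note that the end path returns the sequence to an even value only once the last active invalidation finishes:

```c
#include <stdatomic.h>

/* Model of the multi-writer invalidate_seq protocol: the sequence is
 * odd while any invalidation is active, and all concurrently active
 * invalidations observe the same odd value. */
static atomic_flag mmn_lock = ATOMIC_FLAG_INIT;
static unsigned long invalidate_seq = 2;	/* even: idle */
static unsigned int active_invalidate_ranges;

static void mmn_spin_lock(void)
{
	while (atomic_flag_test_and_set_explicit(&mmn_lock,
						 memory_order_acquire))
		;
}

static void mmn_spin_unlock(void)
{
	atomic_flag_clear_explicit(&mmn_lock, memory_order_release);
}

static unsigned long model_invalidate_start(void)
{
	unsigned long seq;

	mmn_spin_lock();
	active_invalidate_ranges++;
	invalidate_seq |= 1;		/* odd: invalidation in progress */
	seq = invalidate_seq;
	mmn_spin_unlock();
	return seq;
}

static void model_invalidate_end(void)
{
	mmn_spin_lock();
	/* only the last active range returns the seq to even */
	if (--active_invalidate_ranges == 0)
		invalidate_seq++;
	mmn_spin_unlock();
}
```

Because every writer holds the lock around both the counter and the sequence update, overlapping invalidations all store the same odd seq, which is the property the WRITE_ONCE on each mrn relies on.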

Cheers, Daniel

> 
> This approach is borrowed from the hmm mirror implementation..
> 
> If for some reason the scheme doesn't work, then the fallback is to
> expand the mmn_mm->lock spinlock to protect the mrn->invalidate_seq at
> some cost in performance.
> 
> Jason

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 16:58   ` Jason Gunthorpe
@ 2019-10-22 11:56     ` Dennis Dalessandro
  2019-10-22 14:37       ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Dennis Dalessandro @ 2019-10-22 11:56 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling,
	linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs

On 10/21/2019 12:58 PM, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 11:55:51AM -0400, Dennis Dalessandro wrote:
>> On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
>>> This is still being tested, but I figured to send it to start getting help
>>> from the xen, amd and hfi drivers which I cannot test here.
>>
>> Sorry for the delay, I never saw this. I was not on the Cc list and it didn't
>> register to me that it impacted hfi. I'll take a look and run it through some
>> hfi1 tests.
> 
> Hm, you were cc'd on the hfi1 patch of the series:
> 
> https://patchwork.kernel.org/patch/11191395/
> 
> So you saw that, right?

I do now.

> But it seems that git send-email didn't pull all the cc's together?

I don't know. I thought it did, at one time I recall trying to get it 
*not* to do that, when preparing some internal reviews. Haven't used it 
for a long time though, I've been using stgit.

At any rate can you give me a SHA or branch this applies on top of? I 
have pulled rdma/hmm, rdma/wip/jgg, linus/master but seem to have conflicts.

-Denny





* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-22 11:56     ` Dennis Dalessandro
@ 2019-10-22 14:37       ` Jason Gunthorpe
  0 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-22 14:37 UTC (permalink / raw)
  To: Dennis Dalessandro
  Cc: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling,
	linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs

On Tue, Oct 22, 2019 at 07:56:12AM -0400, Dennis Dalessandro wrote:
> On 10/21/2019 12:58 PM, Jason Gunthorpe wrote:
> > On Mon, Oct 21, 2019 at 11:55:51AM -0400, Dennis Dalessandro wrote:
> > > On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> > > > This is still being tested, but I figured to send it to start getting help
> > > > from the xen, amd and hfi drivers which I cannot test here.
> > > 
> > > Sorry for the delay, I never saw this. I was not on the Cc list and it didn't
> > > register to me that it impacted hfi. I'll take a look and run it through some
> > > hfi1 tests.
> > 
> > Hm, you were cc'd on the hfi1 patch of the series:
> > 
> > https://patchwork.kernel.org/patch/11191395/
> > 
> > So you saw that, right?
> 
> I do now.
> 
> > But it seems that git send-email didn't pull all the cc's together?
> 
> I don't know. I thought it did, at one time I recall trying to get it *not*
> to do that, when preparing some internal reviews. Haven't used it for a long
> time though, I've been using stgit.
> 
> At any rate can you give me a SHA or branch this applies on top of? I have
> pulled rdma/hmm, rdma/wip/jgg, linus/master but seem to have conflicts.

Here is a full tree:

https://github.com/jgunthorpe/linux/commits/mmu_notifier

It needs the ODP series on the mailing list to apply

Thanks,
Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-22  7:57                     ` Daniel Vetter
@ 2019-10-22 15:01                       ` Jason Gunthorpe
  2019-10-23  9:08                         ` Daniel Vetter
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-22 15:01 UTC (permalink / raw)
  To: Koenig, Christian, Andrea Arcangeli, Yang, Philip,
	Ralph Campbell, linux-rdma, John Hubbard, Kuehling, Felix,
	amd-gfx, linux-mm, Jerome Glisse, dri-devel, Ben Skeggs

On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:

> > The unusual bit in all of this is using a lock's critical region to
> > 'protect' data for read, but updating that same data before the lock's
> > critical section. ie relying on the unlock barrier to 'release' program
> > ordered stores done before the lock's own critical region, and the
> > lock side barrier to 'acquire' those stores.
> 
> I think this unusual use of locks as barriers for other unlocked accesses
> deserves comments even more than just normal barriers. Can you pls add
> > them? I think the design seems sound ...
> 
> Also the comment on the driver's lock hopefully prevents driver
> maintainers from moving the driver_lock around in a way that would very
> subtly break the scheme, so I think having the acquire barrier commented
> in each place would be really good.

There is already a lot of documentation, I think it would be helpful
if you could suggest some specific places where you think an addition
would help? I think the perspective of someone less familiar with this
design would really improve the documentation

I've been tempted to force the driver to store the seq number directly
under the driver lock - this makes the scheme much clearer, ie
something like this:

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 712c99918551bc..738fa670dcfb19 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -488,7 +488,8 @@ struct svm_notifier {
 };
 
 static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
-                                        const struct mmu_notifier_range *range)
+                                        const struct mmu_notifier_range *range,
+                                        unsigned long seq)
 {
        struct svm_notifier *sn =
                container_of(mrn, struct svm_notifier, notifier);
@@ -504,6 +505,7 @@ static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
                mutex_lock(&sn->svmm->mutex);
        else if (!mutex_trylock(&sn->svmm->mutex))
                return false;
+       mmu_range_notifier_update_seq(mrn, seq);
        mutex_unlock(&sn->svmm->mutex);
        return true;
 }


At the cost of making the driver a bit more complex, what do you
think?

Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-22 15:01                       ` Jason Gunthorpe
@ 2019-10-23  9:08                         ` Daniel Vetter
  2019-10-23  9:32                           ` Christian König
  0 siblings, 1 reply; 58+ messages in thread
From: Daniel Vetter @ 2019-10-23  9:08 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Koenig, Christian, Andrea Arcangeli, Yang, Philip,
	Ralph Campbell, linux-rdma, John Hubbard, Kuehling, Felix,
	amd-gfx, linux-mm, Jerome Glisse, dri-devel, Ben Skeggs

On Tue, Oct 22, 2019 at 03:01:13PM +0000, Jason Gunthorpe wrote:
> On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
> 
> > > The unusual bit in all of this is using a lock's critical region to
> > > 'protect' data for read, but updating that same data before the lock's
> > > critical section. ie relying on the unlock barrier to 'release' program
> > > ordered stores done before the lock's own critical region, and the
> > > lock side barrier to 'acquire' those stores.
> > 
> > I think this unusual use of locks as barriers for other unlocked accesses
> > deserves comments even more than just normal barriers. Can you pls add
> > > them? I think the design seems sound ...
> > 
> > Also the comment on the driver's lock hopefully prevents driver
> > maintainers from moving the driver_lock around in a way that would very
> > subtly break the scheme, so I think having the acquire barrier commented
> > in each place would be really good.
> 
> There is already a lot of documentation, I think it would be helpful
> if you could suggest some specific places where you think an addition
> would help? I think the perspective of someone less familiar with this
> design would really improve the documentation

Hm I just meant the usual recommendation that "barriers must have comments
explaining what they order, and where the other side of the barrier is".
Using unlock/lock as a barrier imo just makes that an even better idea.
Usually what I do is something like "we need to order $this against $that
below, and the other side of this barrier is in function()." With maybe a
bit more if it's not obvious how things go wrong if the ordering is broken.

Ofc seqlock.h itself skimps on that rule and doesn't bother explaining its
barriers :-/

> I've been tempted to force the driver to store the seq number directly
> under the driver lock - this makes the scheme much clearer, ie
> something like this:
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
> index 712c99918551bc..738fa670dcfb19 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
> @@ -488,7 +488,8 @@ struct svm_notifier {
>  };
>  
>  static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
> -                                        const struct mmu_notifier_range *range)
> +                                        const struct mmu_notifier_range *range,
> +                                        unsigned long seq)
>  {
>         struct svm_notifier *sn =
>                 container_of(mrn, struct svm_notifier, notifier);
> @@ -504,6 +505,7 @@ static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
>                 mutex_lock(&sn->svmm->mutex);
>         else if (!mutex_trylock(&sn->svmm->mutex))
>                 return false;
> +       mmu_range_notifier_update_seq(mrn, seq);
>         mutex_unlock(&sn->svmm->mutex);
>         return true;
>  }
> 
> 
> At the cost of making the driver a bit more complex, what do you
> think?

Hm, spinning this further ... could we initialize the mmu range notifier
with a pointer to the driver lock, so that we could put a
lockdep_assert_held into mmu_range_notifier_update_seq? I think that would
make this scheme substantially more driver-hacker proof :-)

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-23  9:08                         ` Daniel Vetter
@ 2019-10-23  9:32                           ` Christian König
  2019-10-23 16:52                             ` Jerome Glisse
  0 siblings, 1 reply; 58+ messages in thread
From: Christian König @ 2019-10-23  9:32 UTC (permalink / raw)
  To: Jason Gunthorpe, Koenig, Christian, Andrea Arcangeli, Yang,
	Philip, Ralph Campbell, linux-rdma, John Hubbard, Kuehling,
	Felix, amd-gfx, linux-mm, Jerome Glisse, dri-devel, Ben Skeggs

Am 23.10.19 um 11:08 schrieb Daniel Vetter:
> On Tue, Oct 22, 2019 at 03:01:13PM +0000, Jason Gunthorpe wrote:
>> On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
>>
>>>> The unusual bit in all of this is using a lock's critical region to
>>>> 'protect' data for read, but updating that same data before the lock's
>>>> critical section. ie relying on the unlock barrier to 'release' program
>>>> ordered stores done before the lock's own critical region, and the
>>>> lock side barrier to 'acquire' those stores.
>>> I think this unusual use of locks as barriers for other unlocked accesses
>>> deserves comments even more than just normal barriers. Can you pls add
>>> them? I think the design seems sound ...
>>>
>>> Also the comment on the driver's lock hopefully prevents driver
>>> maintainers from moving the driver_lock around in a way that would very
>>> subtly break the scheme, so I think having the acquire barrier commented
>>> in each place would be really good.
>> There is already a lot of documentation, I think it would be helpful
>> if you could suggest some specific places where you think an addition
>> would help? I think the perspective of someone less familiar with this
>> design would really improve the documentation
> Hm I just meant the usual recommendation that "barriers must have comments
> explaining what they order, and where the other side of the barrier is".
> Using unlock/lock as a barrier imo just makes that an even better idea.
> Usually what I do is something like "we need to order $this against $that
> below, and the other side of this barrier is in function()." With maybe a
> bit more if it's not obvious how things go wrong if the ordering is broken.
>
> Ofc seqlock.h itself skimps on that rule and doesn't bother explaining its
> barriers :-/
>
>> I've been tempted to force the driver to store the seq number directly
>> under the driver lock - this makes the scheme much clearer, ie
>> something like this:
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
>> index 712c99918551bc..738fa670dcfb19 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
>> @@ -488,7 +488,8 @@ struct svm_notifier {
>>   };
>>   
>>   static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
>> -                                        const struct mmu_notifier_range *range)
>> +                                        const struct mmu_notifier_range *range,
>> +                                        unsigned long seq)
>>   {
>>          struct svm_notifier *sn =
>>                  container_of(mrn, struct svm_notifier, notifier);
>> @@ -504,6 +505,7 @@ static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
>>                  mutex_lock(&sn->svmm->mutex);
>>          else if (!mutex_trylock(&sn->svmm->mutex))
>>                  return false;
>> +       mmu_range_notifier_update_seq(mrn, seq);
>>          mutex_unlock(&sn->svmm->mutex);
>>          return true;
>>   }
>>
>>
>> At the cost of making the driver a bit more complex, what do you
>> think?
> Hm, spinning this further ... could we initialize the mmu range notifier
> with a pointer to the driver lock, so that we could put a
> lockdep_assert_held into mmu_range_notifier_update_seq? I think that would
> make this scheme substantially more driver-hacker proof :-)

Going another step further.... what hinders us from putting the lock into the 
mmu range notifier itself and having _lock()/_unlock() helpers?

I mean having the lock in the driver only makes sense when the driver 
would be using the same lock for multiple things, e.g. multiple MMU 
range notifiers under the same lock. But I really don't see that use 
case here.

Regards,
Christian.

>
> Cheers, Daniel



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-23  9:32                           ` Christian König
@ 2019-10-23 16:52                             ` Jerome Glisse
  2019-10-23 17:24                               ` Jason Gunthorpe
  0 siblings, 1 reply; 58+ messages in thread
From: Jerome Glisse @ 2019-10-23 16:52 UTC (permalink / raw)
  To: christian.koenig
  Cc: Jason Gunthorpe, Andrea Arcangeli, Yang, Philip, Ralph Campbell,
	linux-rdma, John Hubbard, Kuehling, Felix, amd-gfx, linux-mm,
	dri-devel, Ben Skeggs

On Wed, Oct 23, 2019 at 11:32:16AM +0200, Christian König wrote:
> Am 23.10.19 um 11:08 schrieb Daniel Vetter:
> > On Tue, Oct 22, 2019 at 03:01:13PM +0000, Jason Gunthorpe wrote:
> > > On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
> > > 
> > > > > The unusual bit in all of this is using a lock's critical region to
> > > > > 'protect' data for read, but updating that same data before the lock's
> > > > > critical section. ie relying on the unlock barrier to 'release' program
> > > > > ordered stores done before the lock's own critical region, and the
> > > > > lock side barrier to 'acquire' those stores.
> > > > I think this unusual use of locks as barriers for other unlocked accesses
> > > > deserves comments even more than just normal barriers. Can you pls add
> > > > them? I think the design seems sound ...
> > > > 
> > > > Also the comment on the driver's lock hopefully prevents driver
> > > > maintainers from moving the driver_lock around in a way that would very
> > > > subtly break the scheme, so I think having the acquire barrier commented
> > > > in each place would be really good.
> > > There is already a lot of documentation, I think it would be helpful
> > > if you could suggest some specific places where you think an addition
> > > would help? I think the perspective of someone less familiar with this
> > > design would really improve the documentation
> > Hm I just meant the usual recommendation that "barriers must have comments
> > explaining what they order, and where the other side of the barrier is".
> > Using unlock/lock as a barrier imo just makes that an even better idea.
> > Usually what I do is something like "we need to order $this against $that
> > below, and the other side of this barrier is in function()." With maybe a
> > bit more if it's not obvious how things go wrong if the ordering is broken.
> > 
> > Ofc seqlock.h itself skimps on that rule and doesn't bother explaining its
> > barriers :-/
> > 
> > > I've been tempted to force the driver to store the seq number directly
> > > under the driver lock - this makes the scheme much clearer, ie
> > > something like this:
> > > 
> > > diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
> > > index 712c99918551bc..738fa670dcfb19 100644
> > > --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
> > > +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
> > > @@ -488,7 +488,8 @@ struct svm_notifier {
> > >   };
> > >   static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
> > > -                                        const struct mmu_notifier_range *range)
> > > +                                        const struct mmu_notifier_range *range,
> > > +                                        unsigned long seq)
> > >   {
> > >          struct svm_notifier *sn =
> > >                  container_of(mrn, struct svm_notifier, notifier);
> > > @@ -504,6 +505,7 @@ static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
> > >                  mutex_lock(&sn->svmm->mutex);
> > >          else if (!mutex_trylock(&sn->svmm->mutex))
> > >                  return false;
> > > +       mmu_range_notifier_update_seq(mrn, seq);
> > >          mutex_unlock(&sn->svmm->mutex);
> > >          return true;
> > >   }
> > > 
> > > 
> > > At the cost of making the driver a bit more complex, what do you
> > > think?
> > Hm, spinning this further ... could we initialize the mmu range notifier
> > with a pointer to the driver lock, so that we could put a
> > lockdep_assert_held into mmu_range_notifier_update_seq? I think that would
> > make this scheme substantially more driver-hacker proof :-)
> 
> > Going another step further.... what hinders us from putting the lock into the mmu
> > range notifier itself and having _lock()/_unlock() helpers?
> 
> I mean having the lock in the driver only makes sense when the driver would
> be using the same lock for multiple things, e.g. multiple MMU range
> notifiers under the same lock. But I really don't see that use case here.

I actually do, nouveau uses one lock to protect the page table and that's the
lock that matters. You can have multiple ranges for a single page table, the
idea being that only a sub-set of the process address space is ever accessed
by the GPU, and thus it is better to focus on this sub-set and track
invalidation at a finer grain.

Cheers,
Jérôme



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-23 16:52                             ` Jerome Glisse
@ 2019-10-23 17:24                               ` Jason Gunthorpe
  2019-10-24  2:16                                 ` Christoph Hellwig
  0 siblings, 1 reply; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-23 17:24 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: christian.koenig, Andrea Arcangeli, Yang, Philip, Ralph Campbell,
	linux-rdma, John Hubbard, Kuehling, Felix, amd-gfx, linux-mm,
	dri-devel, Ben Skeggs

On Wed, Oct 23, 2019 at 12:52:23PM -0400, Jerome Glisse wrote:
> > Going another step further.... what hinders us from putting the lock into the mmu
> > range notifier itself and having _lock()/_unlock() helpers?
> > 
> > I mean having the lock in the driver only makes sense when the driver would
> > be using the same lock for multiple things, e.g. multiple MMU range
> > notifiers under the same lock. But I really don't see that use case here.
> 
> I actually do, nouveau uses one lock to protect the page table and that's the
> lock that matters. You can have multiple ranges for a single page table, the
> idea being that only a sub-set of the process address space is ever accessed
> by the GPU, and thus it is better to focus on this sub-set and track
> invalidation at a finer grain.

mlx5 is similar, but not currently coded quite right, there is one
lock that protects the command queue for submitting invalidations to
the HW and it doesn't make a lot of sense to have additional fine
grained locking beyond that.

So I suppose the intent here is that most drivers would have a single
'page table lock' that protects the HW's page table update, and this
lock is the one that should be held while updating and checking the
sequence number.

dma_fence based drivers are possibly a little different, I think they
can just use a spinlock, their pattern should probably be something
like

fault:
 hmm_range_fault()

 spin_lock()
 if (mmu_range_read_retry()))
     goto again
 dma_fence_init(mrn->fence)
 spin_unlock()

invalidate:
 spin_lock()
 is_inited = 'dma fence init has been called'
 spin_unlock()
 if (is_inited)
    dma_fence_wait(fence)


I'm not sure, never used dma_fence before. The key thing is that the
dma_fence_wait() cannot block until after the mmu_range_read_retry() &
unlock completes. Otherwise it can deadlock with hmm_range_fault().

It would be nice to figure this out and add it to the hmm.rst as we do
have two drivers using the dma_fence scheme.

Also, the use of a spinlock here probably says we should keep the lock
external.

But, it sounds like the mmu_range_notifier_update_seq() is a good
idea, so let me add that in v2.

Jason


* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-21 19:06   ` Jason Gunthorpe
@ 2019-10-23 20:26     ` Jerome Glisse
  0 siblings, 0 replies; 58+ messages in thread
From: Jerome Glisse @ 2019-10-23 20:26 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Ralph Campbell, John Hubbard, Felix.Kuehling, linux-rdma,
	linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs

On Mon, Oct 21, 2019 at 07:06:00PM +0000, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:40:41PM -0400, Jerome Glisse wrote:
> > On Tue, Oct 15, 2019 at 03:12:27PM -0300, Jason Gunthorpe wrote:
> > > From: Jason Gunthorpe <jgg@mellanox.com>
> > > 
> > > 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> > > scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> > > they only use invalidate_range_start/end and immediately check the
> > > invalidating range against some driver data structure to tell if the
> > > driver is interested. Half of them use an interval_tree, the others are
> > > simple linear search lists.
> > > 
> > > Of the ones I checked they largely seem to have various kinds of races,
> > > bugs and poor implementation. This is a result of the complexity in how
> > > the notifier interacts with get_user_pages(). It is extremely difficult to
> > > use it correctly.
> > > 
> > > Consolidate all of this code together into the core mmu_notifier and
> > > provide a locking scheme similar to hmm_mirror that allows the user to
> > > safely use get_user_pages() and reliably know if the page list still
> > > matches the mm.
> > > 
> > > This new arrangment plays nicely with the !blockable mode for
> > > OOM. Scanning the interval tree is done such that the intersection test
> > > will always succeed, and since there is no invalidate_range_end exposed to
> > > drivers the scheme safely allows multiple drivers to be subscribed.
> > > 
> > > Four places are converted as an example of how the new API is used.
> > > Four are left for future patches:
> > >  - i915_gem has complex locking around destruction of a registration,
> > >    needs more study
> > >  - hfi1 (2nd user) needs access to the rbtree
> > >  - scif_dma has a complicated logic flow
> > >  - vhost's mmu notifiers are already being rewritten
> > > 
> > > This is still being tested, but I figured to send it to start getting help
> > > from the xen, amd and hfi drivers which I cannot test here.
> > 
> > It might be a good oportunity to also switch those users to
> > hmm_range_fault() instead of GUP as GUP is pointless for those
> > users. In fact the GUP is an impediment to normal mm operations.
> 
> I think vhost can use hmm_range_fault
> 
> hfi1 does actually need to have the page pin, it doesn't fence DMA
> during invalidate.
> 
> i915_gem feels a lot like amdgpu, so probably it would benefit
> 
> No idea about scif_dma
> 
> > I will test on nouveau.
> 
> Thanks, hopefully it still works, I think Ralph was able to do some
> basic checks. But it is a pretty complicated series, I probably made
> some mistakes.

So it seems to work ok with nouveau, I will let tests run in a loop though
there are not very advanced tests.

> 
> FWIW, I know that nouveau gets a lockdep splat now from Daniel
> Vetter's recent changes, it tries to do GFP_KERNEL allocations under
> a lock also held by the invalidate_range_start path.

I have not seen any splat so far, is it through some new kernel config?

Cheers,
Jérôme



* Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
  2019-10-23 17:24                               ` Jason Gunthorpe
@ 2019-10-24  2:16                                 ` Christoph Hellwig
  0 siblings, 0 replies; 58+ messages in thread
From: Christoph Hellwig @ 2019-10-24  2:16 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Jerome Glisse, christian.koenig, Andrea Arcangeli, Yang, Philip,
	Ralph Campbell, linux-rdma, John Hubbard, Kuehling, Felix,
	amd-gfx, linux-mm, dri-devel, Ben Skeggs

On Wed, Oct 23, 2019 at 05:24:45PM +0000, Jason Gunthorpe wrote:
> mlx5 is similar, but not currently coded quite right, there is one
> lock that protects the command queue for submitting invalidations to
> the HW and it doesn't make a lot of sense to have additional fine
> grained locking beyond that.

IFF all drivers could agree on a lock type (rw_semaphore?) for this
protection you could add a pointer to the range which would clear
things up a lot.  I'm just not sure you could get everyone to
agree.


* Re: [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier
  2019-10-15 18:12 ` [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier Jason Gunthorpe
  2019-10-21 18:30   ` Jerome Glisse
@ 2019-10-27 23:15   ` Jason Gunthorpe
  1 sibling, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-10-27 23:15 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Michal Hocko

On Tue, Oct 15, 2019 at 03:12:29PM -0300, Jason Gunthorpe wrote:

> +static void mn_itree_release(struct mmu_notifier_mm *mmn_mm,
> +			     struct mm_struct *mm)
> +{
> +	struct mmu_notifier_range range = {
> +		.flags = MMU_NOTIFIER_RANGE_BLOCKABLE,
> +		.event = MMU_NOTIFY_RELEASE,
> +		.mm = mm,
> +		.start = 0,
> +		.end = ULONG_MAX,
> +	};
> +	struct mmu_range_notifier *mrn;
> +	unsigned long cur_seq;
> +	bool ret;
> +
> +	for (mrn = mn_itree_inv_start_range(mmn_mm, &range, &cur_seq); mrn;
> +	     mrn = mn_itree_inv_next(mrn, &range)) {
> +		ret = mrn->ops->invalidate(mrn, &range);
> +		WARN_ON(ret);

This should be 
  WARN_ON(!ret)

Jason


* Re: [Xen-devel] [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER
  2019-10-21 19:12       ` Jason Gunthorpe
@ 2019-10-28  6:25         ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 58+ messages in thread
From: Oleksandr Andrushchenko @ 2019-10-28  6:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Oleksandr Andrushchenko, Jürgen Groß,
	Boris Ostrovsky
  Cc: Andrea Arcangeli, Ralph Campbell, Stefano Stabellini, linux-rdma,
	John Hubbard, Felix.Kuehling, amd-gfx, linux-mm, Jerome Glisse,
	dri-devel, xen-devel, Ben Skeggs

On 10/21/19 10:12 PM, Jason Gunthorpe wrote:
> On Wed, Oct 16, 2019 at 06:35:15AM +0000, Oleksandr Andrushchenko wrote:
>> On 10/16/19 8:11 AM, Jürgen Groß wrote:
>>> On 15.10.19 20:12, Jason Gunthorpe wrote:
>>>> From: Jason Gunthorpe <jgg@mellanox.com>
>>>>
>>>> DMA_SHARED_BUFFER can not be enabled by the user (it represents a
>>>> library
>>>> set in the kernel). The kconfig convention is to use select for such
>>>> symbols so they are turned on implicitly when the user enables a kconfig
>>>> that needs them.
>>>>
>>>> Otherwise the XEN_GNTDEV_DMABUF kconfig is overly difficult to enable.
>>>>
>>>> Fixes: 932d6562179e ("xen/gntdev: Add initial support for dma-buf UAPI")
>>>> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>> Cc: xen-devel@lists.xenproject.org
>>>> Cc: Juergen Gross <jgross@suse.com>
>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>>
>> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Thanks Oleksandr and Juergen, can you also give me some advice on how
> to progress the more complex patch:
>
> https://patchwork.kernel.org/patch/11191369/
>
> Is this gntdev stuff still in-use? I struggled a bit to understand
> what it is doing, but I think I made a reasonable guess?
I think Jurgen and Boris could help here
> Jason



* Re: [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv
  2019-10-15 18:12 ` [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv Jason Gunthorpe
@ 2019-10-29 12:15   ` Dennis Dalessandro
  0 siblings, 0 replies; 58+ messages in thread
From: Dennis Dalessandro @ 2019-10-29 12:15 UTC (permalink / raw)
  To: Jason Gunthorpe, Jerome Glisse, Ralph Campbell, John Hubbard,
	Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx,
	Ben Skeggs, Jason Gunthorpe, Mike Marciniszyn

On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> This converts one of the two users of mmu_notifiers to use the new API.
> The conversion is fairly straightforward, however the existing use of
> notifiers here seems to be racey.
> 
> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Typo in subject s/inset/insert.

Tested-by: Dennis Dalessandro <dennis.dalessandro@intel.com>

Thanks

-Denny


* Re: [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert()
  2019-10-15 18:12 ` [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert() Jason Gunthorpe
@ 2019-11-04 20:25   ` Jason Gunthorpe
  0 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-11-04 20:25 UTC (permalink / raw)
  To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling
  Cc: linux-rdma, linux-mm, Andrea Arcangeli, dri-devel, amd-gfx, Ben Skeggs

On Tue, Oct 15, 2019 at 03:12:32PM -0300, Jason Gunthorpe wrote:
> @@ -250,26 +85,15 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp)
>  			ret = -ENOMEM;
>  			goto out_page_list;
>  		}
> -	}
>  
> -	mn = mmu_notifier_get(&ib_umem_notifiers, umem_odp->umem.owning_mm);
> -	if (IS_ERR(mn)) {
> -		ret = PTR_ERR(mn);
> -		goto out_dma_list;
> -	}
> -	umem_odp->per_mm = per_mm =
> -		container_of(mn, struct ib_ucontext_per_mm, mn);
> -
> -	mutex_init(&umem_odp->umem_mutex);
> -	init_completion(&umem_odp->notifier_completion);
> +		ret = mmu_range_notifier_insert(&umem_odp->notifier, start,
> +						end - start, current->mm);
> +		if (ret)
> +			goto out_dma_list;

It turns out 'current' can't be used here, as this can be called from the
page fault work queue; it should be 'umem_odp->umem.owning_mm'.

The same problem applies to the tgid a few lines below.

It also seems there is a pre-existing problem here: this code doesn't
guarantee it holds an mmget() on the mm for the non-current case when it
called mmu_notifier_get(), or now mmu_range_notifier_insert().

I'll fix this in a dedicated patch.

This incremental sorts it out, I'll squash it into this patch:

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6132b8127e8435..0768bb60ce1662 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -87,12 +87,10 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp)
 		}
 
 		ret = mmu_range_notifier_insert(&umem_odp->notifier, start,
-						end - start, current->mm);
+						end - start,
+						umem_odp->umem.owning_mm);
 		if (ret)
 			goto out_dma_list;
-
-		umem_odp->tgid =
-			get_task_pid(current->group_leader, PIDTYPE_PID);
 	}
 
 	return 0;
@@ -140,8 +138,10 @@ ib_umem_odp_alloc_implicit(struct ib_udata *udata, int access)
 	umem_odp->is_implicit_odp = 1;
 	umem_odp->page_shift = PAGE_SHIFT;
 
+	umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
 	ret = ib_init_umem_odp(umem_odp);
 	if (ret) {
+		put_pid(umem_odp->tgid);
 		kfree(umem_odp);
 		return ERR_PTR(ret);
 	}
@@ -185,8 +185,10 @@ ib_umem_odp_alloc_child(struct ib_umem_odp *root, unsigned long addr,
 	odp_data->page_shift = PAGE_SHIFT;
 	odp_data->notifier.ops = ops;
 
+	odp_data->tgid = get_pid(root->tgid);
 	ret = ib_init_umem_odp(odp_data);
 	if (ret) {
+		put_pid(odp_data->tgid);
 		kfree(odp_data);
 		return ERR_PTR(ret);
 	}
@@ -254,11 +256,14 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr,
 		up_read(&mm->mmap_sem);
 	}
 
+	umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
 	ret = ib_init_umem_odp(umem_odp);
 	if (ret)
-		goto err_free;
+		goto err_put_pid;
 	return umem_odp;
 
+err_put_pid:
+	put_pid(umem_odp->tgid);
 err_free:
 	kfree(umem_odp);
 	return ERR_PTR(ret);


end of thread, other threads:[~2019-11-04 20:25 UTC | newest]

Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled Jason Gunthorpe
2019-10-21 18:32   ` Jerome Glisse
2019-10-15 18:12 ` [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier Jason Gunthorpe
2019-10-21 18:30   ` Jerome Glisse
2019-10-21 18:54     ` Jason Gunthorpe
2019-10-21 19:11       ` Jerome Glisse
2019-10-21 19:24         ` Jason Gunthorpe
2019-10-21 19:47           ` Jerome Glisse
2019-10-27 23:15   ` Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror Jason Gunthorpe
2019-10-21 18:33   ` Jerome Glisse
2019-10-15 18:12 ` [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled Jason Gunthorpe
2019-10-21 18:31   ` Jerome Glisse
2019-10-15 18:12 ` [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert() Jason Gunthorpe
2019-11-04 20:25   ` Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv Jason Gunthorpe
2019-10-29 12:15   ` Dennis Dalessandro
2019-10-15 18:12 ` [PATCH hmm 07/15] drm/radeon: use mmu_range_notifier_insert Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER Jason Gunthorpe
2019-10-16  5:11   ` Jürgen Groß
2019-10-16  6:35     ` Oleksandr Andrushchenko
2019-10-21 19:12       ` Jason Gunthorpe
2019-10-28  6:25         ` [Xen-devel] " Oleksandr Andrushchenko
2019-10-15 18:12 ` [PATCH hmm 09/15] xen/gntdev: use mmu_range_notifier_insert Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 11/15] nouveau: use mmu_range_notifier instead of hmm_mirror Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 12/15] drm/amdgpu: Call find_vma under mmap_sem Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 14/15] drm/amdgpu: Use mmu_range_notifier " Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related Jason Gunthorpe
2019-10-21 18:38   ` Jerome Glisse
2019-10-21 18:57     ` Jason Gunthorpe
2019-10-21 19:19       ` Jerome Glisse
2019-10-16  8:58 ` [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Christian König
2019-10-16 16:04   ` Jason Gunthorpe
2019-10-17  8:54     ` Christian König
2019-10-17 16:26       ` Yang, Philip
2019-10-17 16:47         ` Koenig, Christian
2019-10-18 20:36           ` Jason Gunthorpe
2019-10-20 14:21             ` Koenig, Christian
2019-10-21 13:57               ` Jason Gunthorpe
2019-10-21 14:28                 ` Koenig, Christian
2019-10-21 15:12                   ` Jason Gunthorpe
2019-10-22  7:57                     ` Daniel Vetter
2019-10-22 15:01                       ` Jason Gunthorpe
2019-10-23  9:08                         ` Daniel Vetter
2019-10-23  9:32                           ` Christian König
2019-10-23 16:52                             ` Jerome Glisse
2019-10-23 17:24                               ` Jason Gunthorpe
2019-10-24  2:16                                 ` Christoph Hellwig
2019-10-21 15:55 ` Dennis Dalessandro
2019-10-21 16:58   ` Jason Gunthorpe
2019-10-22 11:56     ` Dennis Dalessandro
2019-10-22 14:37       ` Jason Gunthorpe
2019-10-21 18:40 ` Jerome Glisse
2019-10-21 19:06   ` Jason Gunthorpe
2019-10-23 20:26     ` Jerome Glisse
