From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
	robin.murphy@arm.com, kevin.tian@intel.com,
	baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com,
	jacob.jun.pan@linux.intel.com, christian.koenig@amd.com,
	zhangfei.gao@linaro.org, jgg@ziepe.ca, xuzaibo@huawei.com,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Arnd Bergmann <arnd@arndb.de>,
	Christoph Hellwig <hch@infradead.org>,
	Dimitri Sivanich <sivanich@sgi.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subject: [PATCH v5 01/25] mm/mmu_notifiers: pass private data down to alloc_notifier()
Date: Tue, 14 Apr 2020 19:02:29 +0200
Message-ID: <20200414170252.714402-2-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>

The new allocation scheme introduced by commit 2c7933f53f6b
("mm/mmu_notifiers: add a get/put scheme for the registration") provides
a convenient way for users to attach notifier data to an mm. However, it
would be even better to create this notifier data atomically.

Since the alloc_notifier() callback currently only takes an mm argument,
some users have to perform the allocation in two steps.
alloc_notifier() initially creates an incomplete structure, which is
then finalized using more context once mmu_notifier_get() returns. This
second step requires extra care to order memory accesses against live
invalidation.

The IOMMU SVA module, which attaches an mm to multiple devices,
exemplifies this situation. In essence it does:

	mmu_notifier_get()
	  alloc_notifier()
	     A = kzalloc()
	  /* MMU notifier is published */
	A->ctx = ctx;				// (1)
	device->A = A;
	list_add_rcu(device, A->devices);	// (2)

The invalidate notifier, which may start running before A is fully
initialized, does the following:

	io_mm_invalidate(A)
	  list_for_each_entry_rcu(device, A->devices)
	    device->invalidate(A->ctx)

The invalidate() thread must observe the initialization (1) before the
list insertion (2). This ordering is trivially guaranteed if object A is
fully initialized in alloc_notifier(), before the MMU notifier is
published.
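
With a privdata argument, a user such as the IOMMU SVA module can pass
its context down and complete the object before publication. A minimal
sketch, reusing the names of the example above (ops, ctx and device are
illustrative, not the final SVA code):

	mmu_notifier_get(ops, mm, ctx)
	  alloc_notifier(mm, ctx)
	    A = kzalloc()
	    A->ctx = ctx;			// (1), before publication
	  /* MMU notifier is published */
	device->A = A;
	list_add_rcu(device, A->devices);	// (2)

The invalidate path can then no longer observe A with an uninitialized
A->ctx.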

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v4->v5: provide example in commit message, fix style.
---
 include/linux/mmu_notifier.h       | 11 +++++++----
 drivers/misc/sgi-gru/grutlbpurge.c |  5 +++--
 mm/mmu_notifier.c                  |  6 ++++--
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 736f6918335ed..0536fe85e7457 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -207,7 +207,8 @@ struct mmu_notifier_ops {
 	 * callbacks are currently running. It is called from a SRCU callback
 	 * and cannot sleep.
 	 */
-	struct mmu_notifier *(*alloc_notifier)(struct mm_struct *mm);
+	struct mmu_notifier *(*alloc_notifier)(struct mm_struct *mm,
+					       void *privdata);
 	void (*free_notifier)(struct mmu_notifier *subscription);
 };
 
@@ -271,14 +272,16 @@ static inline int mm_has_notifiers(struct mm_struct *mm)
 }
 
 struct mmu_notifier *mmu_notifier_get_locked(const struct mmu_notifier_ops *ops,
-					     struct mm_struct *mm);
+					     struct mm_struct *mm,
+					     void *privdata);
 static inline struct mmu_notifier *
-mmu_notifier_get(const struct mmu_notifier_ops *ops, struct mm_struct *mm)
+mmu_notifier_get(const struct mmu_notifier_ops *ops, struct mm_struct *mm,
+		 void *privdata)
 {
 	struct mmu_notifier *ret;
 
 	down_write(&mm->mmap_sem);
-	ret = mmu_notifier_get_locked(ops, mm);
+	ret = mmu_notifier_get_locked(ops, mm, privdata);
 	up_write(&mm->mmap_sem);
 	return ret;
 }
diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
index 10921cd2608df..336e1b1df072f 100644
--- a/drivers/misc/sgi-gru/grutlbpurge.c
+++ b/drivers/misc/sgi-gru/grutlbpurge.c
@@ -235,7 +235,8 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
 		gms, range->start, range->end);
 }
 
-static struct mmu_notifier *gru_alloc_notifier(struct mm_struct *mm)
+static struct mmu_notifier *gru_alloc_notifier(struct mm_struct *mm,
+					       void *privdata)
 {
 	struct gru_mm_struct *gms;
 
@@ -266,7 +267,7 @@ struct gru_mm_struct *gru_register_mmu_notifier(void)
 {
 	struct mmu_notifier *mn;
 
-	mn = mmu_notifier_get_locked(&gru_mmuops, current->mm);
+	mn = mmu_notifier_get_locked(&gru_mmuops, current->mm, NULL);
 	if (IS_ERR(mn))
 		return ERR_CAST(mn);
 
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 06852b896fa63..6b9bfb8ca94d2 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -743,6 +743,7 @@ find_get_mmu_notifier(struct mm_struct *mm, const struct mmu_notifier_ops *ops)
  *                           the mm & ops
  * @ops: The operations struct being subscribe with
  * @mm : The mm to attach notifiers too
+ * @privdata: Initialization data passed down to ops->alloc_notifier()
  *
  * This function either allocates a new mmu_notifier via
  * ops->alloc_notifier(), or returns an already existing notifier on the
@@ -756,7 +757,8 @@ find_get_mmu_notifier(struct mm_struct *mm, const struct mmu_notifier_ops *ops)
  * and can be converted to an active mm pointer via mmget_not_zero().
  */
 struct mmu_notifier *mmu_notifier_get_locked(const struct mmu_notifier_ops *ops,
-					     struct mm_struct *mm)
+					     struct mm_struct *mm,
+					     void *privdata)
 {
 	struct mmu_notifier *subscription;
 	int ret;
@@ -769,7 +771,7 @@ struct mmu_notifier *mmu_notifier_get_locked(const struct mmu_notifier_ops *ops,
 			return subscription;
 	}
 
-	subscription = ops->alloc_notifier(mm);
+	subscription = ops->alloc_notifier(mm, privdata);
 	if (IS_ERR(subscription))
 		return subscription;
 	subscription->ops = ops;
-- 
2.26.0


Thread overview: 133+ messages

2020-04-14 17:02 [PATCH v5 00/25] iommu: Shared Virtual Addressing and SMMUv3 support Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 01/25] mm/mmu_notifiers: pass private data down to alloc_notifier() Jean-Philippe Brucker [this message]
2020-04-14 18:09   ` Jason Gunthorpe
2020-04-14 17:02 ` [PATCH v5 02/25] iommu/sva: Manage process address spaces Jean-Philippe Brucker
2020-04-16  7:28   ` Christoph Hellwig
2020-04-16  8:54     ` Jean-Philippe Brucker
2020-04-16 12:13       ` Christoph Hellwig
2020-04-20  7:42         ` Jean-Philippe Brucker
2020-04-20  8:10           ` Christoph Hellwig
2020-04-20 11:44             ` Christian König
2020-04-20 11:55               ` Christoph Hellwig
2020-04-20 12:40                 ` Christian König
2020-04-20 15:00                   ` Felix Kuehling
2020-04-20 17:44                     ` Jacob Pan
2020-04-20 17:52                       ` Felix Kuehling
2020-04-20 13:57           ` Jason Gunthorpe
2020-04-20 17:48             ` Jacob Pan
2020-04-20 18:14               ` Fenghua Yu
2020-04-21  8:55                 ` Christoph Hellwig
2020-04-14 17:02 ` [PATCH v5 03/25] iommu: Add a page fault handler Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 04/25] iommu/sva: Search mm by PASID Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 05/25] iommu/iopf: Handle mm faults Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 06/25] iommu/sva: Register page fault handler Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 07/25] arm64: mm: Add asid_gen_match() helper Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 08/25] arm64: mm: Pin down ASIDs for sharing mm with devices Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 09/25] iommu/io-pgtable-arm: Move some definitions to a header Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 10/25] iommu/arm-smmu-v3: Manage ASIDs with xarray Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 11/25] arm64: cpufeature: Export symbol read_sanitised_ftr_reg() Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 12/25] iommu/arm-smmu-v3: Share process page tables Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 13/25] iommu/arm-smmu-v3: Seize private ASID Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 14/25] iommu/arm-smmu-v3: Add support for VHE Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 15/25] iommu/arm-smmu-v3: Enable broadcast TLB maintenance Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 16/25] iommu/arm-smmu-v3: Add SVA feature checking Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 17/25] iommu/arm-smmu-v3: Implement mm operations Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 18/25] iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 19/25] iommu/arm-smmu-v3: Add support for Hardware Translation Table Update Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 20/25] iommu/arm-smmu-v3: Maintain a SID->device structure Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 21/25] dt-bindings: document stall property for IOMMU masters Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 22/25] iommu/arm-smmu-v3: Add stall support for platform devices Jean-Philippe Brucker
2020-04-14 17:02 ` [PATCH v5 23/25] PCI/ATS: Add PRI stubs Jean-Philippe Brucker
2020-04-14 18:03   ` Kuppuswamy, Sathyanarayanan
2020-04-14 17:02 ` [PATCH v5 24/25] PCI/ATS: Export PRI functions Jean-Philippe Brucker
2020-04-14 18:03   ` Kuppuswamy, Sathyanarayanan
2020-04-14 17:02 ` [PATCH v5 25/25] iommu/arm-smmu-v3: Add support for PRI Jean-Philippe Brucker
