* [PATCH] iommu/amd: page-specific invalidations for more than one page
@ 2021-03-23 21:06 ` Nadav Amit
From: Nadav Amit @ 2021-03-23 21:06 UTC
  To: Joerg Roedel, Will Deacon; +Cc: Nadav Amit, Jiajun Cao, iommu, linux-kernel

From: Nadav Amit <namit@vmware.com>

Currently, IOMMU invalidations and device-IOTLB invalidations using the
AMD IOMMU fall back to full address-space invalidation if more than a
single page needs to be flushed.

Full flushes are especially inefficient when the IOMMU is virtualized by
a hypervisor, since they require the hypervisor to synchronize the entire
address-space.

AMD IOMMUs allow a mask to be provided to perform page-specific
invalidations for multiple pages that match the address. The mask is
encoded as part of the address, and the first zero bit in the address
(in bits [51:12]) indicates the mask size.
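
For illustration only (the helper below is a made-up sketch, not code
from this patch; u64, PAGE_SHIFT and PAGE_MASK are the usual kernel
definitions): a naturally aligned power-of-two range is encoded by
keeping the base bits above the range and setting the address bits
below the first zero bit.

	/*
	 * Sketch: encode a flush of "pages" (a power of two), with
	 * "addr" naturally aligned to the size of the range.
	 */
	static u64 inv_addr_sketch(u64 addr, u64 pages)
	{
		u64 mask = (pages << PAGE_SHIFT) - 1;

		/* Base above the range, ones below the first zero bit */
		return (addr & ~mask) | ((mask >> 1) & PAGE_MASK);
	}

With the size (S) bit set in the command, inv_addr_sketch(0x10000, 8)
yields 0x13000: bits [13:12] are set and bit 14 is the first zero bit,
so the IOMMU invalidates the 32KB (8-page) range 0x10000-0x17fff.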

Use this hardware feature to perform selective IOMMU and IOTLB flushes.
Combine the logic between both for better code reuse.

The IOMMU invalidations passed a smoke-test. The device IOTLB
invalidations are untested.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 drivers/iommu/amd/iommu.c | 76 +++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 34 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 9256f84f5ebf..5f2dc3d7f2dc 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -927,33 +927,58 @@ static void build_inv_dte(struct iommu_cmd *cmd, u16 devid)
 	CMD_SET_TYPE(cmd, CMD_INV_DEV_ENTRY);
 }
 
-static void build_inv_iommu_pages(struct iommu_cmd *cmd, u64 address,
-				  size_t size, u16 domid, int pde)
+/*
+ * Builds an invalidation address which is suitable for one page or multiple
 + * pages. Sets the size bit (S) if more than one page is flushed.
+ */
+static inline u64 build_inv_address(u64 address, size_t size)
 {
-	u64 pages;
-	bool s;
+	u64 pages, end, msb_diff;
 
 	pages = iommu_num_pages(address, size, PAGE_SIZE);
-	s     = false;
 
-	if (pages > 1) {
+	if (pages == 1)
+		return address & PAGE_MASK;
+
+	end = address + size - 1;
+
+	/*
 +	 * msb_diff holds the index of the most significant bit that
+	 * flipped between the start and end.
+	 */
+	msb_diff = fls64(end ^ address) - 1;
+
+	/*
+	 * Bits 63:52 are sign extended. If for some reason bit 51 is different
+	 * between the start and the end, invalidate everything.
+	 */
+	if (unlikely(msb_diff > 51)) {
+		address = CMD_INV_IOMMU_ALL_PAGES_ADDRESS;
+	} else {
 		/*
-		 * If we have to flush more than one page, flush all
-		 * TLB entries for this domain
+		 * The msb-bit must be clear on the address. Just set all the
+		 * lower bits.
 		 */
-		address = CMD_INV_IOMMU_ALL_PAGES_ADDRESS;
-		s = true;
 +		address |= (1ull << msb_diff) - 1;
 	}
 
+	/* Clear bits 11:0 */
 	address &= PAGE_MASK;
 
+	/* Set the size bit - we flush more than one 4kb page */
+	return address | CMD_INV_IOMMU_PAGES_SIZE_MASK;
+}
+
+static void build_inv_iommu_pages(struct iommu_cmd *cmd, u64 address,
+				  size_t size, u16 domid, int pde)
+{
+	u64 inv_address = build_inv_address(address, size);
+
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->data[1] |= domid;
-	cmd->data[2]  = lower_32_bits(address);
-	cmd->data[3]  = upper_32_bits(address);
+	cmd->data[2]  = lower_32_bits(inv_address);
+	cmd->data[3]  = upper_32_bits(inv_address);
 	CMD_SET_TYPE(cmd, CMD_INV_IOMMU_PAGES);
-	if (s) /* size bit - we flush more than one 4kb page */
-		cmd->data[2] |= CMD_INV_IOMMU_PAGES_SIZE_MASK;
 	if (pde) /* PDE bit - we want to flush everything, not only the PTEs */
 		cmd->data[2] |= CMD_INV_IOMMU_PAGES_PDE_MASK;
 }
@@ -961,32 +986,15 @@ static void build_inv_iommu_pages(struct iommu_cmd *cmd, u64 address,
 static void build_inv_iotlb_pages(struct iommu_cmd *cmd, u16 devid, int qdep,
 				  u64 address, size_t size)
 {
-	u64 pages;
-	bool s;
-
-	pages = iommu_num_pages(address, size, PAGE_SIZE);
-	s     = false;
-
-	if (pages > 1) {
-		/*
-		 * If we have to flush more than one page, flush all
-		 * TLB entries for this domain
-		 */
-		address = CMD_INV_IOMMU_ALL_PAGES_ADDRESS;
-		s = true;
-	}
-
-	address &= PAGE_MASK;
+	u64 inv_address = build_inv_address(address, size);
 
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->data[0]  = devid;
 	cmd->data[0] |= (qdep & 0xff) << 24;
 	cmd->data[1]  = devid;
-	cmd->data[2]  = lower_32_bits(address);
-	cmd->data[3]  = upper_32_bits(address);
+	cmd->data[2]  = lower_32_bits(inv_address);
+	cmd->data[3]  = upper_32_bits(inv_address);
 	CMD_SET_TYPE(cmd, CMD_INV_IOTLB_PAGES);
-	if (s)
-		cmd->data[2] |= CMD_INV_IOMMU_PAGES_SIZE_MASK;
 }
 
 static void build_inv_iommu_pasid(struct iommu_cmd *cmd, u16 domid, u32 pasid,
-- 
2.25.1


* Re: [PATCH] iommu/amd: page-specific invalidations for more than one page
  2021-03-23 21:06 ` Nadav Amit
@ 2021-04-07 10:01   ` Joerg Roedel
From: Joerg Roedel @ 2021-04-07 10:01 UTC
  To: Nadav Amit; +Cc: Will Deacon, Nadav Amit, Jiajun Cao, iommu, linux-kernel

On Tue, Mar 23, 2021 at 02:06:19PM -0700, Nadav Amit wrote:
> From: Nadav Amit <namit@vmware.com>
> 
> Currently, IOMMU invalidations and device-IOTLB invalidations using the
> AMD IOMMU fall back to full address-space invalidation if more than a
> single page needs to be flushed.
> 
> Full flushes are especially inefficient when the IOMMU is virtualized by
> a hypervisor, since they require the hypervisor to synchronize the entire
> address-space.
> 
> AMD IOMMUs allow a mask to be provided to perform page-specific
> invalidations for multiple pages that match the address. The mask is
> encoded as part of the address, and the first zero bit in the address
> (in bits [51:12]) indicates the mask size.
> 
> Use this hardware feature to perform selective IOMMU and IOTLB flushes.
> Combine the logic between both for better code reuse.
> 
> The IOMMU invalidations passed a smoke-test. The device IOTLB
> invalidations are untested.

Have you thoroughly tested this on real hardware? I had a patch-set
doing the same many years ago and it led to data corruption under load.
Back then it could have been a bug in my code of course, but it made me
cautious about using targeted invalidations.

Regards,

	Joerg


* Re: [PATCH] iommu/amd: page-specific invalidations for more than one page
  2021-04-07 10:01   ` Joerg Roedel
@ 2021-04-07 17:57     ` Nadav Amit
From: Nadav Amit @ 2021-04-07 17:57 UTC
  To: Joerg Roedel; +Cc: Will Deacon, Jiajun Cao, iommu, linux-kernel



> On Apr 7, 2021, at 3:01 AM, Joerg Roedel <joro@8bytes.org> wrote:
> 
> On Tue, Mar 23, 2021 at 02:06:19PM -0700, Nadav Amit wrote:
>> From: Nadav Amit <namit@vmware.com>
>> 
>> Currently, IOMMU invalidations and device-IOTLB invalidations using the
>> AMD IOMMU fall back to full address-space invalidation if more than a
>> single page needs to be flushed.
>> 
>> Full flushes are especially inefficient when the IOMMU is virtualized by
>> a hypervisor, since they require the hypervisor to synchronize the entire
>> address-space.
>> 
>> AMD IOMMUs allow a mask to be provided to perform page-specific
>> invalidations for multiple pages that match the address. The mask is
>> encoded as part of the address, and the first zero bit in the address
>> (in bits [51:12]) indicates the mask size.
>> 
>> Use this hardware feature to perform selective IOMMU and IOTLB flushes.
>> Combine the logic between both for better code reuse.
>> 
>> The IOMMU invalidations passed a smoke-test. The device IOTLB
>> invalidations are untested.
> 
> Have you thoroughly tested this on real hardware? I had a patch-set
> doing the same many years ago and it led to data corruption under load.
> Back then it could have been a bug in my code of course, but it made me
> cautious about using targeted invalidations.

I tested it on real bare-metal hardware. I ran some basic I/O workloads
with the IOMMU enabled, checkers enabled/disabled, and so on.

However, I only tested the IOMMU flushes and did not test that the
device-IOTLB flushes work, since I did not have the hardware for that.

If you can refer me to the old patches, I will have a look and see
whether I can see a difference in the logic or test them. If you want
me to run different tests - let me know. If you want me to remove
the device-IOTLB invalidations logic - that is also fine with me.


* Re: [PATCH] iommu/amd: page-specific invalidations for more than one page
  2021-04-07 17:57     ` Nadav Amit
@ 2021-04-08  7:18       ` Joerg Roedel
From: Joerg Roedel @ 2021-04-08  7:18 UTC
  To: Nadav Amit; +Cc: Will Deacon, Jiajun Cao, iommu, linux-kernel

Hi Nadav,

On Wed, Apr 07, 2021 at 05:57:31PM +0000, Nadav Amit wrote:
> I tested it on real bare-metal hardware. I ran some basic I/O workloads
> with the IOMMU enabled, checkers enabled/disabled, and so on.
> 
> However, I only tested the IOMMU flushes and did not test that the
> device-IOTLB flushes work, since I did not have the hardware for that.
> 
> If you can refer me to the old patches, I will have a look and see
> whether I can see a difference in the logic or test them. If you want
> me to run different tests - let me know. If you want me to remove
> the device-IOTLB invalidations logic - that is also fine with me.

Here is the patch-set; it is from 2010 and against a very old version of
the AMD IOMMU driver:

	https://lore.kernel.org/lkml/1265898797-32183-1-git-send-email-joerg.roedel@amd.com/

Regards,

	Joerg

* Re: [PATCH] iommu/amd: page-specific invalidations for more than one page
  2021-04-08  7:18       ` Joerg Roedel
@ 2021-04-08 10:29         ` Nadav Amit
From: Nadav Amit @ 2021-04-08 10:29 UTC
  To: Joerg Roedel; +Cc: Will Deacon, Jiajun Cao, iommu, linux-kernel


> On Apr 8, 2021, at 12:18 AM, Joerg Roedel <joro@8bytes.org> wrote:
> 
> Hi Nadav,
> 
> On Wed, Apr 07, 2021 at 05:57:31PM +0000, Nadav Amit wrote:
>> I tested it on real bare-metal hardware. I ran some basic I/O workloads
>> with the IOMMU enabled, checkers enabled/disabled, and so on.
>> 
>> However, I only tested the IOMMU flushes and did not test that the
>> device-IOTLB flushes work, since I did not have the hardware for that.
>> 
>> If you can refer me to the old patches, I will have a look and see
>> whether I can see a difference in the logic or test them. If you want
>> me to run different tests - let me know. If you want me to remove
>> the device-IOTLB invalidations logic - that is also fine with me.
> 
> Here is the patch-set; it is from 2010 and against a very old version of
> the AMD IOMMU driver:

Thanks. I looked at your code and I see a difference between the
implementations.

As far as I understand, pages are always assumed to be aligned to their
own sizes. I therefore assume that flushes should regard the lower bits
as a “mask” and not just as an encoding of the size.

In the version that you referred me to, iommu_update_domain_tlb() only
regards the size of the region to be flushed and disregards the
alignment:

+	order   = get_order(domain->flush.end - domain->flush.start);
+	mask    = (0x1000ULL << order) - 1;
+	address = ((domain->flush.start & ~mask) | (mask >> 1)) & ~0xfffULL;


If you need to flush, for instance, the region between 0x1000 and 0x5000,
this version would use an address|mask of 0x1000 (a 16KB page). The
version I sent regards the alignment, and since the range is not aligned
it would use an address|mask of 0x3000 (a 32KB page).

IIUC, IOVA allocations today are aligned in such a way, but at least in
the past (looking at 3.19, for that matter) it was not always like that,
which can explain the problems.
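
Concretely, for the 0x1000-0x5000 example (a worked sketch of mine in
plain C, adapting the snippet above; not code from either patch):

	int order;
	u64 mask, address, msb_diff;

	/* 2010 version: the mask is derived from the size alone */
	order   = get_order(0x5000 - 0x1000);           /* 2, i.e. 16KB */
	mask    = (0x1000ULL << order) - 1;             /* 0x3fff */
	address = ((0x1000 & ~mask) | (mask >> 1)) & ~0xfffULL;
	/* address == 0x1000: 16KB flush of 0x0000-0x3fff, misses 0x4000-0x4fff */

	/* My version: the mask also covers the misalignment of the range */
	msb_diff = fls64(0x4fff ^ 0x1000) - 1;          /* 14 */
	address  = (0x1000 | ((1ull << msb_diff) - 1)) & ~0xfffULL;
	/* address == 0x3000: 32KB flush of 0x0000-0x7fff, covers the range */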

Thoughts?

* Re: [PATCH] iommu/amd: page-specific invalidations for more than one page
  2021-04-08 10:29         ` Nadav Amit
@ 2021-04-08 13:23           ` Joerg Roedel
From: Joerg Roedel @ 2021-04-08 13:23 UTC
  To: Nadav Amit; +Cc: Will Deacon, Jiajun Cao, iommu, linux-kernel

On Thu, Apr 08, 2021 at 10:29:25AM +0000, Nadav Amit wrote:
> In the version that you referred me to, iommu_update_domain_tlb() only
> regards the size of the region to be flushed and disregards the
> alignment:
> 
> +	order   = get_order(domain->flush.end - domain->flush.start);
> +	mask    = (0x1000ULL << order) - 1;
> +	address = ((domain->flush.start & ~mask) | (mask >> 1)) & ~0xfffULL;
> 
> 
> If you need to flush, for instance, the region between 0x1000 and 0x5000,
> this version would use an address|mask of 0x1000 (a 16KB page). The
> version I sent regards the alignment, and since the range is not aligned
> it would use an address|mask of 0x3000 (a 32KB page).
> 
> IIUC, IOVA allocations today are aligned in such a way, but at least in
> the past (looking at 3.19, for that matter) it was not always like that,
> which can explain the problems.

Yeah, that makes sense and explains the data corruption problems. I will
give your patch a try on one of my test machines and consider it for
v5.13 if all goes well.

Thanks,

	Joerg

* Re: [PATCH] iommu/amd: page-specific invalidations for more than one page
  2021-03-23 21:06 ` Nadav Amit
@ 2021-04-08 15:16   ` Joerg Roedel
From: Joerg Roedel @ 2021-04-08 15:16 UTC
  To: Nadav Amit; +Cc: Will Deacon, Nadav Amit, Jiajun Cao, iommu, linux-kernel

On Tue, Mar 23, 2021 at 02:06:19PM -0700, Nadav Amit wrote:
>  drivers/iommu/amd/iommu.c | 76 +++++++++++++++++++++------------------
>  1 file changed, 42 insertions(+), 34 deletions(-)

Load-testing looks good here too, so this patch is queued now for v5.13,
thanks Nadav.

Regards,

	Joerg
