From: Tom Murphy <tmurphy@arista.com>
To: iommu@lists.linux-foundation.org
Cc: dima@arista.com, jamessewart@arista.com, murphyt7@tcd.ie,
	Tom Murphy <tmurphy@arista.com>, Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will.deacon@arm.com>,
	Robin Murphy <robin.murphy@arm.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Kukjin Kim <kgene@kernel.org>,
	Krzysztof Kozlowski <krzk@kernel.org>,
	Matthias Brugger <matthias.bgg@gmail.com>,
	Andy Gross <andy.gross@linaro.org>,
	David Brown <david.brown@linaro.org>,
	Rob Clark <robdclark@gmail.com>, Heiko Stuebner <heiko@sntech.de>,
	Marc Zyngier <marc.zyngier@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org, linux-rockchip@lists.infradead.org
Subject: [PATCH 6/9] iommu/amd: Implement map_atomic
Date: Thu, 11 Apr 2019 19:47:35 +0100	[thread overview]
Message-ID: <20190411184741.27540-7-tmurphy@arista.com> (raw)
In-Reply-To: <20190411184741.27540-1-tmurphy@arista.com>

Instead of replacing the mutex with a spin lock, I removed the mutex
from both the amd_iommu_map and amd_iommu_unmap paths entirely.
iommu_map does not take a lock while mapping, so if iommu_map is
called by two different threads on the same iova region it races even
with these locks in place; the locking in amd_iommu_map and
amd_iommu_unmap therefore adds no real protection. The fix is for
whatever manages the allocated iovas to externally ensure that
iommu_map is never called twice on the same region at the same time.
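
To illustrate the serialization model this change relies on, here is a
minimal sketch (not part of this patch) of how a caller such as the
dma-iommu layer keeps map calls exclusive: the IOVA allocator hands
each range to exactly one caller before the map call is made, so the
driver needs no lock of its own. example_dma_map() and its parameters
are hypothetical, iommu_map_atomic() is the interface proposed in
patch 1/9 of this series, and size is assumed to be aligned to the
IOVA granule.

#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/iova.h>

static dma_addr_t example_dma_map(struct iommu_domain *domain,
				  struct iova_domain *iovad,
				  phys_addr_t paddr, size_t size,
				  int prot, bool atomic)
{
	unsigned long shift = iova_shift(iovad);
	unsigned long pfn;
	dma_addr_t iova;
	int ret;

	/*
	 * The allocator hands this IOVA range to exactly one caller.
	 * The 32-bit limit is chosen arbitrarily for the example.
	 */
	pfn = alloc_iova_fast(iovad, size >> shift,
			      DMA_BIT_MASK(32) >> shift, true);
	if (!pfn)
		return DMA_MAPPING_ERROR;
	iova = (dma_addr_t)pfn << shift;

	/* Atomic contexts take the GFP_ATOMIC path added below. */
	ret = atomic ? iommu_map_atomic(domain, iova, paddr, size, prot)
		     : iommu_map(domain, iova, paddr, size, prot);
	if (ret) {
		free_iova_fast(iovad, pfn, size >> shift);
		return DMA_MAPPING_ERROR;
	}

	return iova;
}

Because the caller owns the range for the duration of the call,
dropping domain->api_lock in the hunks below loses no protection.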

Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/amd_iommu.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 2d4ee10626b4..b45e0e033adc 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -3089,12 +3089,12 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
 	return ret;
 }
 
-static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
-			 phys_addr_t paddr, size_t page_size, int iommu_prot)
+static int __amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
+			 phys_addr_t paddr, size_t page_size, int iommu_prot,
+			 gfp_t gfp)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	int prot = 0;
-	int ret;
 
 	if (domain->mode == PAGE_MODE_NONE)
 		return -EINVAL;
@@ -3104,11 +3104,21 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	mutex_lock(&domain->api_lock);
-	ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_KERNEL);
-	mutex_unlock(&domain->api_lock);
+	return iommu_map_page(domain, iova, paddr, page_size, prot, gfp);
+}
 
-	return ret;
+static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
+			 phys_addr_t paddr, size_t page_size, int iommu_prot)
+{
+	return __amd_iommu_map(dom, iova, paddr, page_size, iommu_prot,
+			GFP_KERNEL);
+}
+
+static int amd_iommu_map_atomic(struct iommu_domain *dom, unsigned long iova,
+			 phys_addr_t paddr, size_t page_size, int iommu_prot)
+{
+	return __amd_iommu_map(dom, iova, paddr, page_size, iommu_prot,
+			GFP_ATOMIC);
 }
 
 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
@@ -3262,6 +3272,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.attach_dev = amd_iommu_attach_device,
 	.detach_dev = amd_iommu_detach_device,
 	.map = amd_iommu_map,
+	.map_atomic = amd_iommu_map_atomic,
 	.unmap = amd_iommu_unmap,
 	.iova_to_phys = amd_iommu_iova_to_phys,
 	.add_device = amd_iommu_add_device,
-- 
2.17.1

Thread overview: 89+ messages

2019-04-11 18:47 [PATCH 0/9] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
2019-04-11 18:47 ` [PATCH 1/9] iommu/dma-iommu: Add iommu_map_atomic Tom Murphy
2019-04-15  6:30   ` Christoph Hellwig
2019-04-11 18:47 ` [PATCH 2/9] iommu/dma-iommu: Add function to flush any cached not present IOTLB entries Tom Murphy
2019-04-16 14:00   ` Robin Murphy
2019-04-16 16:40     ` Tom Murphy
2019-04-11 18:47 ` [PATCH 3/9] iommu/dma-iommu: Add iommu_dma_copy_reserved_iova, iommu_dma_apply_resv_region to the dma-iommu api Tom Murphy
2019-04-15  6:31   ` Christoph Hellwig
2019-04-16 13:22     ` Tom Murphy
2019-04-16 13:37       ` Robin Murphy
2019-04-11 18:47 ` [PATCH 4/9] iommu/dma-iommu: Add iommu_dma_map_page_coherent Tom Murphy
2019-04-15  6:33   ` Christoph Hellwig
2019-04-11 18:47 ` [PATCH 5/9] iommu/amd: Implement .flush_np_cache Tom Murphy
2019-04-15  6:33   ` Christoph Hellwig
2019-04-15 18:18     ` Tom Murphy
2019-04-11 18:47 ` [PATCH 6/9] iommu/amd: Implement map_atomic Tom Murphy [this message]
2019-04-16 14:13   ` Robin Murphy
2019-04-11 18:47 ` [PATCH 7/9] iommu/amd: Use the dma-iommu api Tom Murphy
2019-04-11 18:47 ` [PATCH 8/9] iommu/amd: Clean up unused functions Tom Murphy
2019-04-15  6:22   ` Christoph Hellwig
2019-04-11 18:47 ` [PATCH 9/9] iommu/amd: Add allocated domain to global list earlier Tom Murphy
2019-04-15  6:23   ` Christoph Hellwig
2019-04-15 18:06     ` Tom Murphy
2019-04-15  6:18 ` [PATCH 0/9] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Christoph Hellwig