* [PATCH v3 4/6] vfio: type1: replace domain wide protection flags with supported capabilities
@ 2015-01-29 12:24 GAUGUEY Rémy 228890
  2015-01-29 16:13   ` Alex Williamson
  0 siblings, 1 reply; 5+ messages in thread
From: GAUGUEY Rémy 228890 @ 2015-01-29 12:24 UTC (permalink / raw)
  To: Antonios Motakis, kvmarm, iommu, alex.williamson
  Cc: open list:VFIO DRIVER, will.deacon, open list, tech

Hi Antonios, 

On Thu, 27 Nov 2014 18:22:53 Antonios Motakis wrote:
>VFIO_IOMMU_TYPE1 keeps track for each domain it knows a list of protection
>flags it always applies to all mappings in the domain. This is used for
>domains that support IOMMU_CAP_CACHE_COHERENCY.
>
>Refactor this slightly, by keeping track instead that a given domain
>supports the capability, and applying the IOMMU_CACHE protection flag when
>doing the actual DMA mappings.
>
>This will allow us to reuse the behavior for IOMMU_CAP_NOEXEC, which we
>also want to keep track of, but without applying it to all domains that
>support it unless the user explicitly requests it.
>
>Signed-off-by: Antonios Motakis <a.motakis@xxxxxxxxxxxxxxxxxxxxxx>
>---
> drivers/vfio/vfio_iommu_type1.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
>diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>index 4a9d666..c54dab8 100644
>--- a/drivers/vfio/vfio_iommu_type1.c
>+++ b/drivers/vfio/vfio_iommu_type1.c
> 			if (ret)
> 				return ret;
> 
>@@ -731,7 +740,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> 	}
> 
> 	if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
>-		domain->prot |= IOMMU_CACHE;
>+		domain->caps |= IOMMU_CAP_CACHE_COHERENCY;

IMHO this is not good, since IOMMU_CAP_CACHE_COHERENCY is not a bit mask but a plain enum value.
See include/linux/iommu.h:
enum iommu_cap {
	IOMMU_CAP_CACHE_COHERENCY,	/* IOMMU can enforce cache coherent DMA
					   transactions */
	IOMMU_CAP_INTR_REMAP,		/* IOMMU supports interrupt isolation */
	IOMMU_CAP_NOEXEC,		/* IOMMU_NOEXEC flag */
};
One possible fix would be to redefine the enum with bit mask values:
enum iommu_cap {
	IOMMU_CAP_CACHE_COHERENCY = 1, /* IOMMU can enforce cache coherent DMA
					   transactions */
	IOMMU_CAP_INTR_REMAP = 2,	/* IOMMU supports interrupt isolation */
	IOMMU_CAP_NOEXEC = 4,		/* IOMMU_NOEXEC flag */
};

Regards
Rémy

>_______________________________________________
>kvmarm mailing list
>kvmarm@xxxxxxxxxxxxxxxxxxxxx
>https://lists.cs.columbia.edu/mailman/listinfo/kvmarm



* Re: [PATCH v3 4/6] vfio: type1: replace domain wide protection flags with supported capabilities
@ 2015-01-29 16:13   ` Alex Williamson
  0 siblings, 0 replies; 5+ messages in thread
From: Alex Williamson @ 2015-01-29 16:13 UTC (permalink / raw)
  To: GAUGUEY Rémy 228890
  Cc: Antonios Motakis, kvmarm, iommu, open list:VFIO DRIVER,
	will.deacon, open list, tech

On Thu, 2015-01-29 at 12:24 +0000, GAUGUEY Rémy 228890 wrote:
> Hi Antonios, 
> 
> On Thu, 27 Nov 2014 18:22:53 Antonios Motakis wrote:
> >VFIO_IOMMU_TYPE1 keeps track for each domain it knows a list of protection
> >flags it always applies to all mappings in the domain. This is used for
> >domains that support IOMMU_CAP_CACHE_COHERENCY.
> >
> >Refactor this slightly, by keeping track instead that a given domain
> >supports the capability, and applying the IOMMU_CACHE protection flag when
> >doing the actual DMA mappings.
> >
> >This will allow us to reuse the behavior for IOMMU_CAP_NOEXEC, which we
> >also want to keep track of, but without applying it to all domains that
> >support it unless the user explicitly requests it.
> >
> >Signed-off-by: Antonios Motakis <a.motakis@xxxxxxxxxxxxxxxxxxxxxx>
> >---
> > drivers/vfio/vfio_iommu_type1.c | 25 +++++++++++++++++--------
> > 1 file changed, 17 insertions(+), 8 deletions(-)
> >
> >diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >index 4a9d666..c54dab8 100644
> >--- a/drivers/vfio/vfio_iommu_type1.c
> >+++ b/drivers/vfio/vfio_iommu_type1.c
> > 			if (ret)
> > 				return ret;
> > 
> >@@ -731,7 +740,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> > 	}
> > 
> > 	if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
> >-		domain->prot |= IOMMU_CACHE;
> >+		domain->caps |= IOMMU_CAP_CACHE_COHERENCY;
> 
> IMHO this is not good, since IOMMU_CAP_CACHE_COHERENCY is not a bit mask but a plain enum value.
> See include/linux/iommu.h:
> enum iommu_cap {
> 	IOMMU_CAP_CACHE_COHERENCY,	/* IOMMU can enforce cache coherent DMA
> 					   transactions */
> 	IOMMU_CAP_INTR_REMAP,		/* IOMMU supports interrupt isolation */
> 	IOMMU_CAP_NOEXEC,		/* IOMMU_NOEXEC flag */
> };

Good catch!

> One possible fix would be to redefine the enum with bit mask values:
> enum iommu_cap {
> 	IOMMU_CAP_CACHE_COHERENCY = 1, /* IOMMU can enforce cache coherent DMA
> 					   transactions */
> 	IOMMU_CAP_INTR_REMAP = 2,	/* IOMMU supports interrupt isolation */
> 	IOMMU_CAP_NOEXEC = 4,		/* IOMMU_NOEXEC flag */
> };

This seems like a large imposition on the IOMMU code to conform to our
specific use case.  Perhaps the method used by the original code to
cache domain overriding mapping flags separately is the better approach.
Thanks,

Alex




* [PATCH v3 4/6] vfio: type1: replace domain wide protection flags with supported capabilities
@ 2014-11-27 17:22   ` Antonios Motakis
  0 siblings, 0 replies; 5+ messages in thread
From: Antonios Motakis @ 2014-11-27 17:22 UTC (permalink / raw)
  To: kvmarm, iommu, alex.williamson
  Cc: will.deacon, tech, christoffer.dall, eric.auger, kim.phillips,
	marc.zyngier, Antonios Motakis, open list:VFIO DRIVER, open list

VFIO_IOMMU_TYPE1 keeps track for each domain it knows a list of protection
flags it always applies to all mappings in the domain. This is used for
domains that support IOMMU_CAP_CACHE_COHERENCY.

Refactor this slightly, by keeping track instead that a given domain
supports the capability, and applying the IOMMU_CACHE protection flag when
doing the actual DMA mappings.

This will allow us to reuse the behavior for IOMMU_CAP_NOEXEC, which we
also want to keep track of, but without applying it to all domains that
support it unless the user explicitly requests it.

Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
---
 drivers/vfio/vfio_iommu_type1.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4a9d666..c54dab8 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -65,7 +65,7 @@ struct vfio_domain {
 	struct iommu_domain	*domain;
 	struct list_head	next;
 	struct list_head	group_list;
-	int			prot;		/* IOMMU_CACHE */
+	int			caps;
 };
 
 struct vfio_dma {
@@ -486,7 +486,7 @@ static int map_try_harder(struct vfio_domain *domain, dma_addr_t iova,
 	for (i = 0; i < npage; i++, pfn++, iova += PAGE_SIZE) {
 		ret = iommu_map(domain->domain, iova,
 				(phys_addr_t)pfn << PAGE_SHIFT,
-				PAGE_SIZE, prot | domain->prot);
+				PAGE_SIZE, prot);
 		if (ret)
 			break;
 	}
@@ -504,11 +504,16 @@ static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
 	int ret;
 
 	list_for_each_entry(d, &iommu->domain_list, next) {
+		int dprot = prot;
+
+		if (d->caps & IOMMU_CAP_CACHE_COHERENCY)
+			dprot |= IOMMU_CACHE;
+
 		ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
-				npage << PAGE_SHIFT, prot | d->prot);
+				npage << PAGE_SHIFT, dprot);
 		if (ret) {
 			if (ret != -EBUSY ||
-			    map_try_harder(d, iova, pfn, npage, prot))
+			    map_try_harder(d, iova, pfn, npage, dprot))
 				goto unwind;
 		}
 	}
@@ -621,6 +626,10 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 	struct vfio_domain *d;
 	struct rb_node *n;
 	int ret;
+	int dprot = 0;
+
+	if (domain->caps & IOMMU_CAP_CACHE_COHERENCY)
+		dprot |= IOMMU_CACHE;
 
 	/* Arbitrarily pick the first domain in the list for lookups */
 	d = list_first_entry(&iommu->domain_list, struct vfio_domain, next);
@@ -654,7 +663,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 				size += PAGE_SIZE;
 
 			ret = iommu_map(domain->domain, iova, phys,
-					size, dma->prot | domain->prot);
+					size, dma->prot | dprot);
 			if (ret)
 				return ret;
 
@@ -731,7 +740,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
-		domain->prot |= IOMMU_CACHE;
+		domain->caps |= IOMMU_CAP_CACHE_COHERENCY;
 
 	/*
 	 * Try to match an existing compatible domain.  We don't want to
@@ -742,7 +751,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	 */
 	list_for_each_entry(d, &iommu->domain_list, next) {
 		if (d->domain->ops == domain->domain->ops &&
-		    d->prot == domain->prot) {
+		    d->caps == domain->caps) {
 			iommu_detach_group(domain->domain, iommu_group);
 			if (!iommu_attach_group(d->domain, iommu_group)) {
 				list_add(&group->next, &d->group_list);
@@ -884,7 +893,7 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(domain, &iommu->domain_list, next) {
-		if (!(domain->prot & IOMMU_CACHE)) {
+		if (!(domain->caps & IOMMU_CAP_CACHE_COHERENCY)) {
 			ret = 0;
 			break;
 		}
-- 
2.1.3




end of thread, other threads:[~2015-01-29 16:14 UTC | newest]

Thread overview: 5+ messages
2015-01-29 12:24 [PATCH v3 4/6] vfio: type1: replace domain wide protection flags with supported capabilities GAUGUEY Rémy 228890
2015-01-29 16:13 ` Alex Williamson
  -- strict thread matches above, loose matches on Subject: below --
2014-11-27 17:22 [PATCH v3 0/6] vfio: type1: support for ARM SMMUS with VFIO_IOMMU_TYPE1 Antonios Motakis
2014-11-27 17:22 ` [PATCH v3 4/6] vfio: type1: replace domain wide protection flags with supported capabilities Antonios Motakis
