From: Auger Eric <eric.auger@redhat.com>
To: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>, alex.williamson@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org, linuxarm@huawei.com, john.garry@huawei.com, xuwei5@hisilicon.com, kevin.tian@intel.com
Subject: Re: [PATCH v8 2/6] vfio/type1: Check reserved region conflict and update iova list
Date: Fri, 26 Jul 2019 19:09:43 +0200
Message-ID: <f8cf7545-2f8c-d1d9-dc51-e787c1a881fb@redhat.com>
In-Reply-To: <20190723160637.8384-3-shameerali.kolothum.thodi@huawei.com>

Hi Shameer,

On 7/23/19 6:06 PM, Shameer Kolothum wrote:
> This retrieves the reserved regions associated with dev group and
> checks for conflicts with any existing dma mappings. Also update
> the iova list excluding the reserved regions.
>
> Reserved regions with type IOMMU_RESV_DIRECT_RELAXABLE are
> excluded from above checks as they are considered as directly
> mapped regions which are known to be relaxable.
>
> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

> ---
> v7-->v8
>  -Added check for iommu_get_group_resv_regions() error ret.
>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 98 +++++++++++++++++++++++++++++++++
>  1 file changed, 98 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 6a69652b406b..a3c9794ccf83 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1496,6 +1496,88 @@ static int vfio_iommu_aper_resize(struct list_head *iova,
>  	return 0;
>  }
>
> +/*
> + * Check reserved region conflicts with existing dma mappings
> + */
> +static bool vfio_iommu_resv_conflict(struct vfio_iommu *iommu,
> +				     struct list_head *resv_regions)
> +{
> +	struct iommu_resv_region *region;
> +
> +	/* Check for conflict with existing dma mappings */
> +	list_for_each_entry(region, resv_regions, list) {
> +		if (region->type == IOMMU_RESV_DIRECT_RELAXABLE)
> +			continue;
> +
> +		if (vfio_find_dma(iommu, region->start, region->length))
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +/*
> + * Check iova region overlap with reserved regions and
> + * exclude them from the iommu iova range
> + */
> +static int vfio_iommu_resv_exclude(struct list_head *iova,
> +				   struct list_head *resv_regions)
> +{
> +	struct iommu_resv_region *resv;
> +	struct vfio_iova *n, *next;
> +
> +	list_for_each_entry(resv, resv_regions, list) {
> +		phys_addr_t start, end;
> +
> +		if (resv->type == IOMMU_RESV_DIRECT_RELAXABLE)
> +			continue;
> +
> +		start = resv->start;
> +		end = resv->start + resv->length - 1;
> +
> +		list_for_each_entry_safe(n, next, iova, list) {
> +			int ret = 0;
> +
> +			/* No overlap */
> +			if (start > n->end || end < n->start)
> +				continue;
> +			/*
> +			 * Insert a new node if current node overlaps with the
> +			 * reserved region to exclude that from valid iova range.
> +			 * Note that, new node is inserted before the current
> +			 * node and finally the current node is deleted keeping
> +			 * the list updated and sorted.
> +			 */
> +			if (start > n->start)
> +				ret = vfio_iommu_iova_insert(&n->list, n->start,
> +							     start - 1);
> +			if (!ret && end < n->end)
> +				ret = vfio_iommu_iova_insert(&n->list, end + 1,
> +							     n->end);
> +			if (ret)
> +				return ret;
> +
> +			list_del(&n->list);
> +			kfree(n);
> +		}
> +	}
> +
> +	if (list_empty(iova))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static void vfio_iommu_resv_free(struct list_head *resv_regions)
> +{
> +	struct iommu_resv_region *n, *next;
> +
> +	list_for_each_entry_safe(n, next, resv_regions, list) {
> +		list_del(&n->list);
> +		kfree(n);
> +	}
> +}
> +
>  static void vfio_iommu_iova_free(struct list_head *iova)
>  {
>  	struct vfio_iova *n, *next;
> @@ -1547,6 +1629,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  	phys_addr_t resv_msi_base;
>  	struct iommu_domain_geometry geo;
>  	LIST_HEAD(iova_copy);
> +	LIST_HEAD(group_resv_regions);
>
>  	mutex_lock(&iommu->lock);
>
> @@ -1632,6 +1715,15 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  		goto out_detach;
>  	}
>
> +	ret = iommu_get_group_resv_regions(iommu_group, &group_resv_regions);
> +	if (ret)
> +		goto out_detach;
> +
> +	if (vfio_iommu_resv_conflict(iommu, &group_resv_regions)) {
> +		ret = -EINVAL;
> +		goto out_detach;
> +	}
> +
>  	/*
>  	 * We don't want to work on the original iova list as the list
>  	 * gets modified and in case of failure we have to retain the
> @@ -1646,6 +1738,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  	if (ret)
>  		goto out_detach;
>
> +	ret = vfio_iommu_resv_exclude(&iova_copy, &group_resv_regions);
> +	if (ret)
> +		goto out_detach;
> +
>  	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
>
>  	INIT_LIST_HEAD(&domain->group_list);
> @@ -1706,6 +1802,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  	/* Delete the old one and insert new iova list */
>  	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
>  	mutex_unlock(&iommu->lock);
> +	vfio_iommu_resv_free(&group_resv_regions);
>
>  	return 0;
>
> @@ -1714,6 +1811,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> out_domain:
>  	iommu_domain_free(domain->domain);
>  	vfio_iommu_iova_free(&iova_copy);
> +	vfio_iommu_resv_free(&group_resv_regions);
> out_free:
>  	kfree(domain);
>  	kfree(group);
>