From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: "Tian, Kevin" <kevin.tian@intel.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"eric.auger@redhat.com" <eric.auger@redhat.com>,
	"pmorel@linux.vnet.ibm.com" <pmorel@linux.vnet.ibm.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xuwei (O)" <xuwei5@huawei.com>, Linuxarm <linuxarm@huawei.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>
Subject: RE: [PATCH v5 2/7] vfio/type1: Check reserve region conflict and update iova list
Date: Mon, 19 Mar 2018 10:55:41 +0000	[thread overview]
Message-ID: <5FC3163CFD30C246ABAA99954A238FA838675F4F@FRAEML521-MBX.china.huawei.com> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D1910857ED@SHSMSX101.ccr.corp.intel.com>



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: Monday, March 19, 2018 7:52 AM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> alex.williamson@redhat.com; eric.auger@redhat.com;
> pmorel@linux.vnet.ibm.com
> Cc: kvm@vger.kernel.org; linux-kernel@vger.kernel.org; xuwei (O)
> <xuwei5@huawei.com>; Linuxarm <linuxarm@huawei.com>;
> iommu@lists.linux-foundation.org
> Subject: RE: [PATCH v5 2/7] vfio/type1: Check reserve region conflict and
> update iova list
> 
> > From: Shameer Kolothum
> > Sent: Friday, March 16, 2018 12:35 AM
> >
> > This retrieves the reserved regions associated with the device group and
> > checks for conflicts with any existing dma mappings. It also updates
> > the iova list to exclude the reserved regions.
> >
> > Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> > ---
> >  drivers/vfio/vfio_iommu_type1.c | 90 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 90 insertions(+)
> >
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index 1123c74..cfe2bb2 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -1313,6 +1313,82 @@ static int vfio_iommu_aper_resize(struct list_head *iova,
> >  	return 0;
> >  }
> >
> > +/*
> > + * Check reserved region conflicts with existing dma mappings
> > + */
> > +static bool vfio_iommu_resv_conflict(struct vfio_iommu *iommu,
> > +				struct list_head *resv_regions)
> > +{
> > +	struct iommu_resv_region *region;
> > +
> > +	/* Check for conflict with existing dma mappings */
> > +	list_for_each_entry(region, resv_regions, list) {
> > +		if (vfio_find_dma(iommu, region->start, region->length))
> > +			return true;
> > +	}
> > +
> > +	return false;
> > +}
> > +
> > +/*
> > + * Check iova region overlap with  reserved regions and
> > + * exclude them from the iommu iova range
> > + */
> > +static int vfio_iommu_resv_exclude(struct list_head *iova,
> > +					struct list_head *resv_regions)
> > +{
> > +	struct iommu_resv_region *resv;
> > +	struct vfio_iova *n, *next;
> > +
> > +	list_for_each_entry(resv, resv_regions, list) {
> > +		phys_addr_t start, end;
> > +
> > +		start = resv->start;
> > +		end = resv->start + resv->length - 1;
> > +
> > +		list_for_each_entry_safe(n, next, iova, list) {
> > +			int ret = 0;
> > +
> > +			/* No overlap */
> > +			if ((start > n->end) || (end < n->start))
> > +				continue;
> > +			/*
> > +			 * Insert a new node if current node overlaps with the
> > +			 * reserve region to exclude that from valid iova range.
> > +			 * Note that, new node is inserted before the current
> > +			 * node and finally the current node is deleted keeping
> > +			 * the list updated and sorted.
> > +			 */
> > +			if (start > n->start)
> > +				ret = vfio_iommu_iova_insert(&n->list,
> > +							n->start, start - 1);
> > +			if (!ret && end < n->end)
> > +				ret = vfio_iommu_iova_insert(&n->list,
> > +							end + 1, n->end);
> > +			if (ret)
> > +				return ret;
> 
> Is it safer to delete the 1st node here in case of failure of the 2nd node?
> There is no problem with current logic since upon error iova_copy will
> be released anyway. However this function alone doesn't assume the
> fact of a temporary list, thus it's better to keep the list clean w/o garbage
> left from any error handling.

Agree. I will consider this.
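
Something along the lines of the below is what I have in mind (just an
untested sketch; it assumes vfio_iommu_iova_insert() links the new node
immediately before the current one on success, so the node to undo sits
at list_prev_entry(n, list)):

	list_for_each_entry_safe(n, next, iova, list) {
		int ret = 0;
		bool split = false;

		/* No overlap */
		if ((start > n->end) || (end < n->start))
			continue;

		if (start > n->start) {
			ret = vfio_iommu_iova_insert(&n->list,
						n->start, start - 1);
			if (ret)
				return ret;
			split = true;
		}

		if (end < n->end) {
			ret = vfio_iommu_iova_insert(&n->list,
						end + 1, n->end);
			if (ret) {
				if (split) {
					/* Drop the node added just above */
					struct vfio_iova *prev =
						list_prev_entry(n, list);

					list_del(&prev->list);
					kfree(prev);
				}
				return ret;
			}
		}

		list_del(&n->list);
		kfree(n);
	}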

> > +
> > +			list_del(&n->list);
> > +			kfree(n);
> > +		}
> > +	}
> > +
> > +	if (list_empty(iova))
> > +		return -EINVAL;
> 
> Should this be a BUG_ON if the list is empty? Or do we really need this check at all?

I think we need the check here. This basically means there is no valid iova
region, as the reserved regions overlap it completely (very unlikely, probably
a bad configuration). The __attach will fail if that happens, and maybe a
WARN_ON is a good idea to notify the user.
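
For example, something like the below (sketch only; WARN_ON() returns the
condition it evaluates, so the attach still fails with -EINVAL while the
user gets a backtrace in dmesg):

	if (WARN_ON(list_empty(iova)))
		return -EINVAL;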

Thanks,
Shameer

> > +
> > +	return 0;
> > +}
> > +
> > +static void vfio_iommu_resv_free(struct list_head *resv_regions)
> > +{
> > +	struct iommu_resv_region *n, *next;
> > +
> > +	list_for_each_entry_safe(n, next, resv_regions, list) {
> > +		list_del(&n->list);
> > +		kfree(n);
> > +	}
> > +}
> > +
> >  static void vfio_iommu_iova_free(struct list_head *iova)
> >  {
> >  	struct vfio_iova *n, *next;
> > @@ -1366,6 +1442,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >  	phys_addr_t resv_msi_base;
> >  	struct iommu_domain_geometry geo;
> >  	LIST_HEAD(iova_copy);
> > +	LIST_HEAD(group_resv_regions);
> >
> >  	mutex_lock(&iommu->lock);
> >
> > @@ -1444,6 +1521,13 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >  		goto out_detach;
> >  	}
> >
> > +	iommu_get_group_resv_regions(iommu_group, &group_resv_regions);
> > +
> > +	if (vfio_iommu_resv_conflict(iommu, &group_resv_regions)) {
> > +		ret = -EINVAL;
> > +		goto out_detach;
> > +	}
> > +
> >  	/* Get a copy of the current iova list and work on it */
> >  	ret = vfio_iommu_iova_get_copy(iommu, &iova_copy);
> >  	if (ret)
> > @@ -1454,6 +1538,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >  	if (ret)
> >  		goto out_detach;
> >
> > +	ret = vfio_iommu_resv_exclude(&iova_copy, &group_resv_regions);
> > +	if (ret)
> > +		goto out_detach;
> > +
> >  	resv_msi = vfio_iommu_has_sw_msi(iommu_group,
> > &resv_msi_base);
> >
> >  	INIT_LIST_HEAD(&domain->group_list);
> > @@ -1514,6 +1602,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >  	/* Delete the old one and insert new iova list */
> >  	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
> >  	mutex_unlock(&iommu->lock);
> > +	vfio_iommu_resv_free(&group_resv_regions);
> >
> >  	return 0;
> >
> > @@ -1522,6 +1611,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >  out_domain:
> >  	iommu_domain_free(domain->domain);
> >  	vfio_iommu_iova_free(&iova_copy);
> > +	vfio_iommu_resv_free(&group_resv_regions);
> >  out_free:
> >  	kfree(domain);
> >  	kfree(group);
> > --
> > 2.7.4
> >
> >
> > _______________________________________________
> > iommu mailing list
> > iommu@lists.linux-foundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/iommu
