From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org, joro@8bytes.org, robh+dt@kernel.org,
	mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org,
	robin.murphy@arm.com, kevin.tian@intel.com,
	baolu.lu@linux.intel.com, jacob.jun.pan@linux.intel.com,
	christian.koenig@amd.com, yi.l.liu@intel.com,
	zhangfei.gao@linaro.org,
	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Subject: Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces
Date: Fri, 28 Feb 2020 15:43:04 +0100
Message-ID: <20200228144304.GC2156@myrica>
In-Reply-To: <20200226123506.000076fb@Huawei.com>

On Wed, Feb 26, 2020 at 12:35:06PM +0000, Jonathan Cameron wrote:
> > + * A single Process Address Space ID (PASID) is allocated for each mm. In the
> > + * example, devices use PASID 1 to read/write into address space X and PASID 2
> > + * to read/write into address space Y. Calling iommu_sva_get_pasid() on bond 1
> > + * returns 1, and calling it on bonds 2-4 returns 2.
> > + *
> > + * Hardware tables describing this configuration in the IOMMU would typically
> > + * look like this:
> > + *
> > + *                                PASID tables
> > + *                                 of domain A
> > + *                              .->+--------+
> > + *                             / 0 |        |-------> io_pgtable
> > + *                            /    +--------+
> > + *            Device tables  /   1 |        |-------> pgd X
> > + *              +--------+  /      +--------+
> > + *      00:00.0 |      A |-'     2 |        |--.
> > + *              +--------+         +--------+   \
> > + *              :        :       3 |        |    \
> > + *              +--------+         +--------+     --> pgd Y
> > + *      00:01.0 |      B |--.                    /
> > + *              +--------+   \                  |
> > + *      00:01.1 |      B |----+   PASID tables  |
> > + *              +--------+     \   of domain B  |
> > + *                              '->+--------+   |
> > + *                               0 |        |-- | --> io_pgtable
> > + *                                 +--------+   |
> > + *                               1 |        |   |
> > + *                                 +--------+   |
> > + *                               2 |        |---'
> > + *                                 +--------+
> > + *                               3 |        |
> > + *                                 +--------+
> > + *
> > + * With this model, a single call binds all devices in a given domain to an
> > + * address space. Other devices in the domain will get the same bond implicitly.
> > + * However, users must issue one bind() for each device, because IOMMUs may
> > + * implement SVA differently. Furthermore, mandating one bind() per device
> > + * allows the driver to perform sanity-checks on device capabilities.
> 
> > + *
> > + * In some IOMMUs, one entry of the PASID table (typically the first one) can
> > + * hold non-PASID translations. In this case PASID 0 is reserved and the first
> > + * entry points to the io_pgtable pointer. In other IOMMUs the io_pgtable
> > + * pointer is held in the device table and PASID 0 is available to the
> > + * allocator.
> 
> Is it worth hammering home in here that we can only do this because the PASID space
> is global (with exception of PASID 0)?  It's a convenient simplification but not
> necessarily a hardware restriction so perhaps we should remind people somewhere in here?

I could add this, four paragraphs up:

"A single Process Address Space ID (PASID) is allocated for each mm. It is
a choice made for the Linux SVA implementation, not a hardware
restriction."

> > + */
> > +
> > +struct io_mm {
> > +	struct list_head		devices;
> > +	struct mm_struct		*mm;
> > +	struct mmu_notifier		notifier;
> > +
> > +	/* Late initialization */
> > +	const struct io_mm_ops		*ops;
> > +	void				*ctx;
> > +	int				pasid;
> > +};
> > +
> > +#define to_io_mm(mmu_notifier)	container_of(mmu_notifier, struct io_mm, notifier)
> > +#define to_iommu_bond(handle)	container_of(handle, struct iommu_bond, sva)
> 
> Code ordering wise, do we want this after the definition of iommu_bond?
> 
> For both of these it's a bit non obvious what they come 'from'.
> I wouldn't naturally assume to_io_mm gets me from notifier to the io_mm
> for example.  Not sure it matters though if these are only used in a few
> places.

Right, I can rename the first one to mn_to_io_mm(). I think the second one
is probably fine as it is.
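
For illustration, if they do move below the struct as you suggest, it would
read roughly like this (the iommu_bond layout here is approximated from the
fields used elsewhere in this patch, not copied from it):

struct iommu_bond {
	struct iommu_sva	sva;
	struct io_mm __rcu	*io_mm;
	struct list_head	mm_head;	/* entry in io_mm->devices */
	void			*drvdata;
	refcount_t		refs;
	struct rcu_head		rcu_head;
};

#define mn_to_io_mm(mmu_notifier) \
	container_of(mmu_notifier, struct io_mm, notifier)
#define to_iommu_bond(handle)	container_of(handle, struct iommu_bond, sva)

That way the 'from' and 'to' types sit next to each other, which hopefully
makes the conversions easier to follow.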


> > +static struct iommu_sva *
> > +io_mm_attach(struct device *dev, struct io_mm *io_mm, void *drvdata)
> > +{
> > +	int ret = 0;
> 
> I'm fairly sure this is set in all paths below.  Now, of course the
> compiler might not think that in which case fair enough :)
> 
> > +	bool attach_domain = true;
> > +	struct iommu_bond *bond, *tmp;
> > +	struct iommu_domain *domain, *other;
> > +	struct iommu_sva_param *param = dev->iommu_param->sva_param;
> > +
> > +	domain = iommu_get_domain_for_dev(dev);
> > +
> > +	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
> > +	if (!bond)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	bond->sva.dev	= dev;
> > +	bond->drvdata	= drvdata;
> > +	refcount_set(&bond->refs, 1);
> > +	RCU_INIT_POINTER(bond->io_mm, io_mm);
> > +
> > +	mutex_lock(&iommu_sva_lock);
> > +	/* Is it already bound to the device or domain? */
> > +	list_for_each_entry(tmp, &io_mm->devices, mm_head) {
> > +		if (tmp->sva.dev != dev) {
> > +			other = iommu_get_domain_for_dev(tmp->sva.dev);
> > +			if (domain == other)
> > +				attach_domain = false;
> > +
> > +			continue;
> > +		}
> > +
> > +		if (WARN_ON(tmp->drvdata != drvdata)) {
> > +			ret = -EINVAL;
> > +			goto err_free;
> > +		}
> > +
> > +		/*
> > +		 * Hold a single io_mm reference per bond. Note that we can't
> > +		 * return an error after this, otherwise the caller would drop
> > +		 * an additional reference to the io_mm.
> > +		 */
> > +		refcount_inc(&tmp->refs);
> > +		io_mm_put(io_mm);
> > +		kfree(bond);
> 
> Free outside the lock would be ever so slightly more logical given we allocated
> before taking the lock.
> 
> > +		mutex_unlock(&iommu_sva_lock);
> > +		return &tmp->sva;
> > +	}
> > +
> > +	list_add_rcu(&bond->mm_head, &io_mm->devices);
> > +	param->nr_bonds++;
> > +	mutex_unlock(&iommu_sva_lock);
> > +
> > +	ret = io_mm->ops->attach(bond->sva.dev, io_mm->pasid, io_mm->ctx,
> > +				 attach_domain);
> > +	if (ret)
> > +		goto err_remove;
> > +
> > +	return &bond->sva;
> > +
> > +err_remove:
> > +	/*
> > +	 * At this point concurrent threads may have started to access the
> > +	 * io_mm->devices list in order to invalidate address ranges, which
> > +	 * requires to free the bond via kfree_rcu()
> > +	 */
> > +	mutex_lock(&iommu_sva_lock);
> > +	param->nr_bonds--;
> > +	list_del_rcu(&bond->mm_head);
> > +
> > +err_free:
> > +	mutex_unlock(&iommu_sva_lock);
> > +	kfree_rcu(bond, rcu_head);
> 
> I don't suppose it matters really but we don't need the rcu free if
> we follow the err_free goto.  Perhaps we are cleaner in this case
> to not use a unified exit path but do that case inline?

Agreed, though I moved the kzalloc() later as suggested by Jacob; I think
it looks a little better and simplifies the error paths.
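
Roughly the shape it takes with the allocation moved after the list walk
(a sketch to show how the error labels go away, not the exact code I'll
send):

static struct iommu_sva *
io_mm_attach(struct device *dev, struct io_mm *io_mm, void *drvdata)
{
	int ret;
	bool attach_domain = true;
	struct iommu_bond *bond, *tmp;
	struct iommu_domain *domain, *other;
	struct iommu_sva_param *param = dev->iommu_param->sva_param;

	domain = iommu_get_domain_for_dev(dev);

	mutex_lock(&iommu_sva_lock);
	/* Is it already bound to the device or domain? */
	list_for_each_entry(tmp, &io_mm->devices, mm_head) {
		if (tmp->sva.dev != dev) {
			other = iommu_get_domain_for_dev(tmp->sva.dev);
			if (domain == other)
				attach_domain = false;
			continue;
		}

		if (WARN_ON(tmp->drvdata != drvdata)) {
			mutex_unlock(&iommu_sva_lock);
			return ERR_PTR(-EINVAL);
		}

		/* Hold a single io_mm reference per bond */
		refcount_inc(&tmp->refs);
		io_mm_put(io_mm);
		mutex_unlock(&iommu_sva_lock);
		return &tmp->sva;
	}

	/* Nothing was allocated above, so the early returns have nothing to free */
	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
	if (!bond) {
		mutex_unlock(&iommu_sva_lock);
		return ERR_PTR(-ENOMEM);
	}

	bond->sva.dev	= dev;
	bond->drvdata	= drvdata;
	refcount_set(&bond->refs, 1);
	RCU_INIT_POINTER(bond->io_mm, io_mm);

	list_add_rcu(&bond->mm_head, &io_mm->devices);
	param->nr_bonds++;
	mutex_unlock(&iommu_sva_lock);

	ret = io_mm->ops->attach(bond->sva.dev, io_mm->pasid, io_mm->ctx,
				 attach_domain);
	if (ret) {
		/* The bond was published, so it must go through kfree_rcu() */
		mutex_lock(&iommu_sva_lock);
		param->nr_bonds--;
		list_del_rcu(&bond->mm_head);
		mutex_unlock(&iommu_sva_lock);
		kfree_rcu(bond, rcu_head);
		return ERR_PTR(ret);
	}

	return &bond->sva;
}

This also drops the "ret = 0" initializer you pointed out, and the kfree()
of an unused bond goes away entirely since the duplicate-bond path never
allocates one.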

Thanks,
Jean
