From: Lu Baolu <baolu.lu@linux.intel.com>
To: Jean-Philippe Brucker <jean-philippe@linaro.org>,
	iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org
Cc: fenghua.yu@intel.com, kevin.tian@intel.com, jgg@ziepe.ca,
	catalin.marinas@arm.com, robin.murphy@arm.com, hch@infradead.org,
	zhangfei.gao@linaro.org, felix.kuehling@amd.com, will@kernel.org,
	christian.koenig@amd.com
Subject: Re: [PATCH v6 04/25] iommu: Add a page fault handler
Date: Sun, 3 May 2020 13:49:01 +0800	[thread overview]
Message-ID: <9a8ec004-0a9c-d772-8e7a-f839002a40b5@linux.intel.com> (raw)
In-Reply-To: <20200430143424.2787566-5-jean-philippe@linaro.org>

Hi Jean,

On 2020/4/30 22:34, Jean-Philippe Brucker wrote:
> Some systems allow devices to handle I/O Page Faults in the core mm. For
> example, systems implementing the PCIe PRI extension or Arm SMMU stall
> model. Infrastructure for reporting these recoverable page faults was
> added to the IOMMU core by commit 0c830e6b3282 ("iommu: Introduce device
> fault report API"). Add a page fault handler for host SVA.
> 
> IOMMU drivers can now instantiate several fault workqueues and link them
> to IOPF-capable devices. Drivers can choose between a single global
> workqueue, one per IOMMU device, one per low-level fault queue, one per
> domain, etc.
> 
> When it receives a fault event, typically in an IRQ handler, the IOMMU
> driver reports the fault using iommu_report_device_fault(), which calls
> the registered handler. The page fault handler then calls the mm fault
> handler, and reports either success or failure with iommu_page_response().
> When the handler succeeds, the IOMMU retries the access.
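
Just to check my understanding of the flow described above, in driver
terms it would be roughly like below (hypothetical names, illustration
only, not the actual driver code):

	/* IOMMU driver's event/IRQ path: */
	struct iommu_fault_event evt = {};

	/* ... decode the hardware event into evt.fault ... */
	iommu_report_device_fault(dev, &evt);

	/*
	 * iommu_report_device_fault() invokes the registered handler, which
	 * queues the fault on the iopf workqueue. The work item later calls
	 * the mm fault handler and replies with iommu_page_response(); on
	 * success the device retries the access.
	 */
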
> 
> The iopf_param pointer could be embedded into iommu_fault_param. But
> putting iopf_param into the iommu_param structure allows us not to care
> about ordering between calls to iopf_queue_add_device() and
> iommu_register_device_fault_handler().
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v5->v6: Simplify flush. As we're not flushing in the mm exit path
>    anymore, we can mandate that IOMMU drivers flush their low-level queue
>    themselves before calling iopf_queue_flush_dev(). No need to register
>    a flush callback anymore.
> ---
>   drivers/iommu/Kconfig      |   3 +
>   drivers/iommu/Makefile     |   1 +
>   include/linux/iommu.h      |  51 +++++
>   drivers/iommu/io-pgfault.c | 383 +++++++++++++++++++++++++++++++++++++
>   4 files changed, 438 insertions(+)
>   create mode 100644 drivers/iommu/io-pgfault.c
> 

[...]

> +
> +static void iopf_handle_group(struct work_struct *work)
> +{
> +	struct iopf_group *group;
> +	struct iopf_fault *iopf, *next;
> +	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
> +
> +	group = container_of(work, struct iopf_group, work);
> +
> +	list_for_each_entry_safe(iopf, next, &group->faults, head) {
> +		/*
> +		 * For the moment, errors are sticky: don't handle subsequent
> +		 * faults in the group if there is an error.
> +		 */
> +		if (status == IOMMU_PAGE_RESP_SUCCESS)
> +			status = iopf_handle_single(iopf);
> +
> +		if (!(iopf->fault.prm.flags &
> +		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
> +			kfree(iopf);

The iopf is freed, but not removed from the list. This leaves a dangling
pointer in the list.
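
Perhaps unlink it before freeing, something along these lines (untested
sketch)?

		if (!(iopf->fault.prm.flags &
		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) {
			/* Unlink so the faults list never points to freed memory. */
			list_del(&iopf->head);
			kfree(iopf);
		}
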

> +	}
> +
> +	iopf_complete_group(group->dev, &group->last_fault, status);
> +	kfree(group);
> +}
> +

[...]

> +/**
> + * iopf_queue_flush_dev - Ensure that all queued faults have been processed
> + * @dev: the endpoint whose faults need to be flushed.
> + * @pasid: the PASID affected by this flush
> + *
> + * The IOMMU driver calls this before releasing a PASID, to ensure that all
> + * pending faults for this PASID have been handled, and won't hit the address
> + * space of the next process that uses this PASID. The driver must make sure
> + * that no new fault is added to the queue. In particular it must flush its
> + * low-level queue before calling this function.
> + *
> + * Return: 0 on success and <0 on error.
> + */
> +int iopf_queue_flush_dev(struct device *dev, int pasid)
> +{
> +	int ret = 0;
> +	struct iopf_device_param *iopf_param;
> +	struct dev_iommu *param = dev->iommu;
> +
> +	if (!param)
> +		return -ENODEV;
> +
> +	mutex_lock(&param->lock);
> +	iopf_param = param->iopf_param;
> +	if (iopf_param)
> +		flush_workqueue(iopf_param->queue->wq);

There may be iopf work items for other PASIDs in the workqueue. Flushing
the whole workqueue will also wait for those other PASIDs and hurt their
latency. Or maybe I'm missing some context here?

> +	else
> +		ret = -ENODEV;
> +	mutex_unlock(&param->lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(iopf_queue_flush_dev);
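
By the way, just to confirm I read the comment above correctly: the
expected calling sequence on the driver side would be something like
below (hypothetical driver helper name, illustration only)?

	/* PASID teardown path in the IOMMU driver: */
	my_driver_flush_hw_fault_queue(smmu);	/* flush the low-level queue first */
	iopf_queue_flush_dev(dev, pasid);	/* then drain the iopf workqueue */
	/* only now is it safe to reuse the PASID for another address space */
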
> +
> +/**
> + * iopf_queue_discard_partial - Remove all pending partial faults
> + * @queue: the queue whose partial faults need to be discarded
> + *
> + * When the hardware queue overflows, the last page faults in a group may have been
> + * lost and the IOMMU driver calls this to discard all partial faults. The
> + * driver shouldn't be adding new faults to this queue concurrently.
> + *
> + * Return: 0 on success and <0 on error.
> + */
> +int iopf_queue_discard_partial(struct iopf_queue *queue)
> +{
> +	struct iopf_fault *iopf, *next;
> +	struct iopf_device_param *iopf_param;
> +
> +	if (!queue)
> +		return -EINVAL;
> +
> +	mutex_lock(&queue->lock);
> +	list_for_each_entry(iopf_param, &queue->devices, queue_list) {
> +		list_for_each_entry_safe(iopf, next, &iopf_param->partial, head)
> +			kfree(iopf);

The iopf is freed but not removed from the list. The same
list_del()-before-kfree() change as suggested above would be needed here.

> +	}
> +	mutex_unlock(&queue->lock);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iopf_queue_discard_partial);
> +
> +/**
> + * iopf_queue_add_device - Add producer to the fault queue
> + * @queue: IOPF queue
> + * @dev: device to add
> + *
> + * Return: 0 on success and <0 on error.
> + */
> +int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev)
> +{
> +	int ret = -EBUSY;
> +	struct iopf_device_param *iopf_param;
> +	struct dev_iommu *param = dev->iommu;
> +
> +	if (!param)
> +		return -ENODEV;
> +
> +	iopf_param = kzalloc(sizeof(*iopf_param), GFP_KERNEL);
> +	if (!iopf_param)
> +		return -ENOMEM;
> +
> +	INIT_LIST_HEAD(&iopf_param->partial);
> +	iopf_param->queue = queue;
> +	iopf_param->dev = dev;
> +
> +	mutex_lock(&queue->lock);
> +	mutex_lock(&param->lock);
> +	if (!param->iopf_param) {
> +		list_add(&iopf_param->queue_list, &queue->devices);
> +		param->iopf_param = iopf_param;
> +		ret = 0;
> +	}
> +	mutex_unlock(&param->lock);
> +	mutex_unlock(&queue->lock);
> +
> +	if (ret)
> +		kfree(iopf_param);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(iopf_queue_add_device);
> +
> +/**
> + * iopf_queue_remove_device - Remove producer from fault queue
> + * @queue: IOPF queue
> + * @dev: device to remove
> + *
> + * Caller makes sure that no more faults are reported for this device.
> + *
> + * Return: 0 on success and <0 on error.
> + */
> +int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev)
> +{
> +	int ret = 0;
> +	struct iopf_fault *iopf, *next;
> +	struct iopf_device_param *iopf_param;
> +	struct dev_iommu *param = dev->iommu;
> +
> +	if (!param || !queue)
> +		return -EINVAL;
> +
> +	mutex_lock(&queue->lock);
> +	mutex_lock(&param->lock);
> +	iopf_param = param->iopf_param;
> +	if (iopf_param && iopf_param->queue == queue) {
> +		list_del(&iopf_param->queue_list);
> +		param->iopf_param = NULL;
> +	} else {
> +		ret = -EINVAL;
> +	}
> +	mutex_unlock(&param->lock);
> +	mutex_unlock(&queue->lock);
> +	if (ret)
> +		return ret;
> +
> +	/* Just in case some faults are still stuck */
> +	list_for_each_entry_safe(iopf, next, &iopf_param->partial, head)
> +		kfree(iopf);

The same issue here.

> +
> +	kfree(iopf_param);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(iopf_queue_remove_device);
> +
> +/**
> + * iopf_queue_alloc - Allocate and initialize a fault queue
> + * @name: a unique string identifying the queue (for workqueue)
> + *
> + * Return: the queue on success and NULL on error.
> + */
> +struct iopf_queue *iopf_queue_alloc(const char *name)
> +{
> +	struct iopf_queue *queue;
> +
> +	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> +	if (!queue)
> +		return NULL;
> +
> +	/*
> +	 * The WQ is unordered because the low-level handler enqueues faults by
> +	 * group. PRI requests within a group have to be ordered, but once
> +	 * that's dealt with, the high-level function can handle groups out of
> +	 * order.
> +	 */
> +	queue->wq = alloc_workqueue("iopf_queue/%s", WQ_UNBOUND, 0, name);
> +	if (!queue->wq) {
> +		kfree(queue);
> +		return NULL;
> +	}
> +
> +	INIT_LIST_HEAD(&queue->devices);
> +	mutex_init(&queue->lock);
> +
> +	return queue;
> +}
> +EXPORT_SYMBOL_GPL(iopf_queue_alloc);
> +
> +/**
> + * iopf_queue_free - Free IOPF queue
> + * @queue: queue to free
> + *
> + * Counterpart to iopf_queue_alloc(). The driver must not be queuing faults or
> + * adding/removing devices on this queue anymore.
> + */
> +void iopf_queue_free(struct iopf_queue *queue)
> +{
> +	struct iopf_device_param *iopf_param, *next;
> +
> +	if (!queue)
> +		return;
> +
> +	list_for_each_entry_safe(iopf_param, next, &queue->devices, queue_list)
> +		iopf_queue_remove_device(queue, iopf_param->dev);
> +
> +	destroy_workqueue(queue->wq);
> +	kfree(queue);
> +}
> +EXPORT_SYMBOL_GPL(iopf_queue_free);
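
For my own understanding, the overall life cycle from a driver's point of
view would then look roughly like below (hypothetical names such as
smmu->iopf_queue and my_iopf_handler, only to confirm I'm reading the API
right):

	/* Probe: one fault queue, shared by the devices behind this IOMMU. */
	smmu->iopf_queue = iopf_queue_alloc(dev_name(smmu->dev));

	/*
	 * Per device, when enabling IOPF. Per the commit message, the order
	 * vs. the fault handler registration doesn't matter.
	 */
	iopf_queue_add_device(smmu->iopf_queue, dev);
	iommu_register_device_fault_handler(dev, my_iopf_handler, dev);

	/* Per device, when disabling IOPF: */
	iommu_unregister_device_fault_handler(dev);
	iopf_queue_remove_device(smmu->iopf_queue, dev);

	/* Driver removal: */
	iopf_queue_free(smmu->iopf_queue);
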
> 

Best regards,
baolu