From: Jason Gunthorpe <jgg@nvidia.com>
To: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	Ashok Raj <ashok.raj@intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-kernel@vger.kernel.org,
	Christoph Hellwig <hch@infradead.org>,
	iommu@lists.linux-foundation.org,
	Joao Martins <joao.m.martins@oracle.com>,
	Will Deacon <will@kernel.org>
Subject: Re: [RFC PATCHES 1/2] iommu: Add RCU-protected page free support
Date: Thu, 9 Jun 2022 09:49:34 -0300	[thread overview]
Message-ID: <20220609124934.GZ1343366@nvidia.com> (raw)
In-Reply-To: <20220609070811.902868-1-baolu.lu@linux.intel.com>

On Thu, Jun 09, 2022 at 03:08:10PM +0800, Lu Baolu wrote:
> The IOMMU page tables are updated through the iommu_map/unmap()
> interfaces. Currently, there is no mandatory requirement for drivers to
> use locks to serialize concurrent updates to the page tables, because
> it's assumed that overlapping IOVA ranges are never updated
> concurrently. Therefore the IOMMU drivers only need to take care of
> concurrent updates to the page table entries at each level.
> 
> But enabling new features challenges this assumption. For example, the
> hardware-assisted dirty page tracking feature requires scanning page
> tables from interfaces other than mapping and unmapping. This might
> result in a use-after-free scenario, where a next-level page table has
> been freed by the unmap() interface while another thread is still
> scanning it.
> 
> This adds RCU-protected page free support so that the pages are only
> freed and reused after an RCU grace period. Hence, the page tables are
> safe to scan within an rcu_read_lock critical region. Considering
> that scanning the page table is a rare case, this also adds a domain
> flag, and the RCU-protected page free path is only used when this flag
> is set.
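
(As a concrete picture of the reader side this protects: a scanner
would presumably look something like the sketch below; the walk helper
and callback names are made up for illustration, not existing code.)

/* Illustration only: domain_walk_pgtable()/collect_dirty_pte() are hypothetical */
static void scan_dirty_bits(struct iommu_domain *domain)
{
	rcu_read_lock();
	/*
	 * Any page-table page still reachable from the root cannot be
	 * freed and reused until after rcu_read_unlock(), even if a
	 * concurrent unmap has already unlinked it.
	 */
	domain_walk_pgtable(domain, collect_dirty_pte);
	rcu_read_unlock();
}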
> 
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
> ---
>  include/linux/iommu.h |  9 +++++++++
>  drivers/iommu/iommu.c | 23 +++++++++++++++++++++++
>  2 files changed, 32 insertions(+)
> 
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 5e1afe169549..6f68eabb8567 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -95,6 +95,7 @@ struct iommu_domain {
>  	void *handler_token;
>  	struct iommu_domain_geometry geometry;
>  	struct iommu_dma_cookie *iova_cookie;
> +	unsigned long concurrent_traversal:1;
>  };
>  
>  static inline bool iommu_is_dma_domain(struct iommu_domain *domain)
> @@ -657,6 +658,12 @@ static inline void dev_iommu_priv_set(struct device *dev, void *priv)
>  	dev->iommu->priv = priv;
>  }
>  
> +static inline void domain_set_concurrent_traversal(struct iommu_domain *domain,
> +						   bool value)
> +{
> +	domain->concurrent_traversal = value;
> +}

?? If you want it to be a driver opt-in I would just add a flag to
the domain ops, e.g. "DOMAIN_FLAG_RCU_FREE_PAGES".
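
Something like the following, as a sketch only - the flags member on
the domain ops and its placement are made up here to show the shape of
the opt-in, not existing code:

#define DOMAIN_FLAG_RCU_FREE_PAGES	BIT(0)

struct iommu_domain_ops {
	/* ... existing callbacks ... */
	unsigned long flags;
};

void iommu_free_pgtbl_pages(struct iommu_domain *domain,
			    struct list_head *pages)
{
	/* Opt-in is a static property of the driver's domain ops */
	if (!(domain->ops->flags & DOMAIN_FLAG_RCU_FREE_PAGES)) {
		put_pages_list(pages);
		return;
	}
	/* ... defer the pages with call_rcu() as in the hunk below ... */
}

Then there is no runtime setter for drivers to remember to call.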

> +void iommu_free_pgtbl_pages(struct iommu_domain *domain,
> +			    struct list_head *pages)
> +{
> +	struct page *page, *next;
> +
> +	if (!domain->concurrent_traversal) {
> +		put_pages_list(pages);
> +		return;
> +	}
> +
> +	list_for_each_entry_safe(page, next, pages, lru) {
> +		list_del(&page->lru);
> +		call_rcu(&page->rcu_head, pgtble_page_free_rcu);
> +	}

It seems OK, but I wonder if there is benefit to using
put_pages_list() from the RCU callback.
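
i.e. collect the whole list behind a single grace period instead of a
call_rcu() per page. One way to read that, as an untested sketch on top
of the hunk above (in drivers/iommu/iommu.c):

/* Untested sketch: defer the whole list behind one grace period */
struct pgtbl_free_batch {
	struct rcu_head rcu;
	struct list_head pages;
};

static void pgtbl_free_batch_rcu(struct rcu_head *rcu)
{
	struct pgtbl_free_batch *batch =
		container_of(rcu, struct pgtbl_free_batch, rcu);

	put_pages_list(&batch->pages);
	kfree(batch);
}

static void iommu_free_pgtbl_pages_deferred(struct list_head *pages)
{
	struct pgtbl_free_batch *batch;

	batch = kzalloc(sizeof(*batch), GFP_ATOMIC);
	if (!batch) {
		struct page *page, *next;

		/* No memory for a batch, fall back to per-page deferral */
		list_for_each_entry_safe(page, next, pages, lru) {
			list_del(&page->lru);
			call_rcu(&page->rcu_head, pgtble_page_free_rcu);
		}
		return;
	}

	INIT_LIST_HEAD(&batch->pages);
	list_splice_init(pages, &batch->pages);
	call_rcu(&batch->rcu, pgtbl_free_batch_rcu);
}

That keeps the pages on a list so they can still go through
put_pages_list()'s batched freeing, at the cost of one allocation per
flush.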

Jason

Thread overview: 24+ messages

2022-06-09  7:08 [RFC PATCHES 1/2] iommu: Add RCU-protected page free support Lu Baolu
2022-06-09  7:08 ` [RFC PATCHES 2/2] iommu: Replace put_pages_list() with iommu_free_pgtbl_pages() Lu Baolu
2022-06-09 12:49 ` [RFC PATCHES 1/2] iommu: Add RCU-protected page free support Jason Gunthorpe [this message]
2022-06-09 13:19   ` Robin Murphy
2022-06-09 13:32     ` Jason Gunthorpe
2022-06-10  5:59       ` Baolu Lu
2022-06-10  5:37   ` Baolu Lu
2022-06-15 15:40     ` Jason Gunthorpe
2022-06-16  2:27       ` Baolu Lu
2022-06-20  4:04         ` Jason Gunthorpe
2022-06-09 17:06 ` Raj, Ashok
2022-06-10  6:05   ` Baolu Lu
