From: "Sironi, Filippo via iommu" <iommu@lists.linux-foundation.org>
To: Joerg Roedel <joro@8bytes.org>
Cc: "iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"jroedel@suse.de" <jroedel@suse.de>
Subject: Re: [PATCH 6/6] iommu/amd: Lock code paths traversing protection_domain->dev_list
Date: Wed, 25 Sep 2019 15:58:03 +0000	[thread overview]
Message-ID: <E176CEB2-E9E0-4CFA-9F43-E3E0488598E8@amazon.de> (raw)
In-Reply-To: <20190925132300.3038-7-joro@8bytes.org>



> On 25. Sep 2019, at 06:23, Joerg Roedel <joro@8bytes.org> wrote:
> 
> From: Joerg Roedel <jroedel@suse.de>
> 
> Traversing this list requires protection_domain->lock to be held to
> avoid nasty races with the attach/detach code. Make sure the lock is
> held on all code paths traversing this list.
> 
> Reported-by: Filippo Sironi <sironi@amazon.de>
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
> drivers/iommu/amd_iommu.c | 25 ++++++++++++++++++++++++-
> 1 file changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index bac4e20a5919..9c26976a0f99 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -1334,8 +1334,12 @@ static void domain_flush_np_cache(struct protection_domain *domain,
> 		dma_addr_t iova, size_t size)
> {
> 	if (unlikely(amd_iommu_np_cache)) {
> +		unsigned long flags;
> +
> +		spin_lock_irqsave(&domain->lock, flags);
> 		domain_flush_pages(domain, iova, size);
> 		domain_flush_complete(domain);
> +		spin_unlock_irqrestore(&domain->lock, flags);
> 	}
> }
> 
> @@ -1700,8 +1704,13 @@ static int iommu_map_page(struct protection_domain *dom,
> 	ret = 0;
> 
> out:
> -	if (updated)
> +	if (updated) {
> +		unsigned long flags;
> +
> +		spin_lock_irqsave(&dom->lock, flags);
> 		update_domain(dom);
> +		spin_unlock_irqrestore(&dom->lock, flags);
> +	}
> 
> 	/* Everything flushed out, free pages now */
> 	free_page_list(freelist);
> @@ -1857,8 +1866,12 @@ static void free_gcr3_table(struct protection_domain *domain)
> 
> static void dma_ops_domain_flush_tlb(struct dma_ops_domain *dom)
> {
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&dom->domain.lock, flags);
> 	domain_flush_tlb(&dom->domain);
> 	domain_flush_complete(&dom->domain);
> +	spin_unlock_irqrestore(&dom->domain.lock, flags);
> }
> 
> static void iova_domain_flush_tlb(struct iova_domain *iovad)
> @@ -2414,6 +2427,7 @@ static dma_addr_t __map_single(struct device *dev,
> {
> 	dma_addr_t offset = paddr & ~PAGE_MASK;
> 	dma_addr_t address, start, ret;
> +	unsigned long flags;
> 	unsigned int pages;
> 	int prot = 0;
> 	int i;
> @@ -2451,8 +2465,10 @@ static dma_addr_t __map_single(struct device *dev,
> 		iommu_unmap_page(&dma_dom->domain, start, PAGE_SIZE);
> 	}
> 
> +	spin_lock_irqsave(&dma_dom->domain.lock, flags);
> 	domain_flush_tlb(&dma_dom->domain);
> 	domain_flush_complete(&dma_dom->domain);
> +	spin_unlock_irqrestore(&dma_dom->domain.lock, flags);
> 
> 	dma_ops_free_iova(dma_dom, address, pages);
> 
> @@ -2481,8 +2497,12 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
> 	}
> 
> 	if (amd_iommu_unmap_flush) {
> +		unsigned long flags;
> +
> +		spin_lock_irqsave(&dma_dom->domain.lock, flags);
> 		domain_flush_tlb(&dma_dom->domain);
> 		domain_flush_complete(&dma_dom->domain);
> +		spin_unlock_irqrestore(&dma_dom->domain.lock, flags);
> 		dma_ops_free_iova(dma_dom, dma_addr, pages);
> 	} else {
> 		pages = __roundup_pow_of_two(pages);
> @@ -3246,9 +3266,12 @@ static bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
> static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
> {
> 	struct protection_domain *dom = to_pdomain(domain);
> +	unsigned long flags;
> 
> +	spin_lock_irqsave(&dom->lock, flags);
> 	domain_flush_tlb_pde(dom);
> 	domain_flush_complete(dom);
> +	spin_unlock_irqrestore(&dom->lock, flags);
> }
> 
> static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
> -- 
> 2.17.1
> 

Reviewed-by: Filippo Sironi <sironi@amazon.de>



Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Ralf Herbrich
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879


