From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org,
	Julien Grall <jgrall@amazon.com>, Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU page-table allocator
Date: Wed, 17 Feb 2021 16:29:43 +0000	[thread overview]
Message-ID: <51618338-daff-5b9a-5214-e0788d95992b@xen.org> (raw)
In-Reply-To: <cf303aee-3d89-5608-f358-6bef5c32d706@suse.com>

Hi Jan,

On 17/02/2021 15:13, Jan Beulich wrote:
> On 17.02.2021 15:24, Julien Grall wrote:
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>  
>>  void arch_iommu_domain_destroy(struct domain *d)
>>  {
>> +    /*
>> +     * There should be not page-tables left allocated by the time the
> Nit: s/not/no/ ?
> 
>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>> +     * called unconditionally, so pgtables may be unitialized.
>> +     */
>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>   }
>>   
>>   static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>        */
>>       hd->platform_ops->clear_root_pgtable(d);
>>   
>> +    /* After this barrier no new page allocations can occur. */
>> +    spin_barrier(&hd->arch.pgtables.lock);
> 
> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
> the barrier? Why introduce another one (with a similar comment)
> explicitly now?
The barriers act differently: one gates against any IOMMU page-table 
modification, while the other gates against allocation.

There is no guarantee that the former will prevent the latter.
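
To illustrate the difference, here is a simplified sketch of the 
synchronization I have in mind (not the exact code, just the shape of it):

    /*
     * Allocation side (iommu_alloc_pgtable()): the page is only
     * queued while the domain is still alive; d->is_dying is
     * sampled under hd->arch.pgtables.lock.
     */
    spin_lock(&hd->arch.pgtables.lock);
    if ( likely(!d->is_dying) )
    {
        alive = true;
        page_list_add(pg, &hd->arch.pgtables.list);
    }
    spin_unlock(&hd->arch.pgtables.lock);

    /*
     * Freeing side (iommu_free_pgtables()): d->is_dying is already
     * set by the time we get here. After the barrier, any racing
     * allocation has either queued its page (so we will free it) or
     * has observed is_dying and will free the page itself.
     */
    spin_barrier(&hd->arch.pgtables.lock);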

> 
>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>       unmap_domain_page(p);
>>   
>>       spin_lock(&hd->arch.pgtables.lock);
>> -    page_list_add(pg, &hd->arch.pgtables.list);
>> +    /*
>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>> +     * nothing prevents allocations from happening afterwards. There is
>> +     * no valid reason to continue to update the IOMMU page-tables while
>> +     * the domain is dying.
>> +     *
>> +     * So prevent page-table allocation when the domain is dying.
>> +     *
>> +     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
>> +     */
>> +    if ( likely(!d->is_dying) )
>> +    {
>> +        alive = true;
>> +        page_list_add(pg, &hd->arch.pgtables.list);
>> +    }
>>       spin_unlock(&hd->arch.pgtables.lock);
>>   
>> +    if ( unlikely(!alive) )
>> +    {
>> +        free_domheap_page(pg);
>> +        pg = NULL;
>> +    }
>> +
>>       return pg;
>>   }
> 
> As before I'm concerned of this forcing error paths to be taken
> elsewhere, in case an allocation still happens (e.g. from unmap
> once super page mappings are supported). Considering some of the
> error handling in the IOMMU code is to invoke domain_crash(), it
> would be quite unfortunate if we ended up crashing a domain
> while it is being cleaned up after.

It is unfortunate, but I think this is better than having to leak the 
page tables.

> 
> Additionally, the (at present still hypothetical) unmap case, if
> failing because of the change here, would then again chance to
> leave mappings in place while the underlying pages get freed. As
> this would likely require an XSA, the change doesn't feel like
> "hardening" to me.

I would agree with this if memory allocations could never fail. That's 
not the case, and it will only become worse as we start using the IOMMU pool.

Do you have callers in mind that don't check the return value of iommu_unmap()?
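
For reference, what I would expect from such a caller is something along 
these lines (a hypothetical sketch, not code from the tree):

    /* Propagate the error rather than silently dropping it. */
    rc = iommu_unmap(d, dfn, PAGE_ORDER_4K, &flush_flags);
    if ( rc )
    {
        printk(XENLOG_ERR "d%d: IOMMU unmap failed: %d\n",
               d->domain_id, rc);
        if ( !is_hardware_domain(d) )
            domain_crash(d); /* don't leave stale mappings behind */
        return rc;
    }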

Cheers,


-- 
Julien Grall


Thread overview: 24+ messages
2021-02-17 14:24 [for-4.15][PATCH v3 0/3] xen/iommu: Collection of bug fixes for IOMMU teardown Julien Grall
2021-02-17 14:24 ` [for-4.15][PATCH v3 1/3] xen/iommu: x86: Clear the root page-table before freeing the page-tables Julien Grall
2021-02-17 14:54   ` Jan Beulich
2021-02-17 15:00     ` Julien Grall
2021-02-17 15:17       ` Jan Beulich
2021-02-17 16:48         ` Julien Grall
2021-02-17 14:24 ` [for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying Julien Grall
2021-02-17 15:01   ` Jan Beulich
2021-02-17 16:07     ` Julien Grall
2021-02-18 13:05       ` Jan Beulich
2021-02-18 13:25         ` Julien Grall
2021-02-19  8:49           ` Jan Beulich
2021-02-19  9:24             ` Julien Grall
2021-02-18 14:00         ` Paul Durrant
2021-02-19  8:56           ` Jan Beulich
2021-02-17 14:24 ` [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU page-table allocator Julien Grall
2021-02-17 15:13   ` Jan Beulich
2021-02-17 16:29     ` Julien Grall [this message]
2021-02-18 13:10       ` Jan Beulich
2021-02-18 13:19         ` Julien Grall
2021-02-18 17:04           ` Jan Beulich
2021-02-18 17:41             ` Julien Grall
2021-02-19  8:46               ` Jan Beulich
2021-02-19  8:57                 ` Julien Grall
