From: Jan Beulich
To: Julien Grall
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org, Julien Grall, Paul Durrant, xen-devel@lists.xenproject.org
Subject: Re: [for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU page-table allocator
Date: Thu, 18 Feb 2021 18:04:06 +0100
Message-ID: <141ea545-3725-5305-d352-057ff7c70c4f@suse.com>
In-Reply-To: <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>
References: <20210217142458.3769-1-julien@xen.org> <20210217142458.3769-4-julien@xen.org> <51618338-daff-5b9a-5214-e0788d95992b@xen.org> <96971bbb-05ec-7df0-a8d7-931cc0b41a77@xen.org>

On 18.02.2021 14:19, Julien Grall wrote:
> 
> 
> On 18/02/2021 13:10, Jan Beulich wrote:
>> On 17.02.2021 17:29, Julien Grall wrote:
>>> On 17/02/2021 15:13, Jan Beulich wrote:
>>>> On 17.02.2021 15:24, Julien Grall wrote:
>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>> @@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
>>>>> 
>>>>>  void arch_iommu_domain_destroy(struct domain *d)
>>>>>  {
>>>>> +    /*
>>>>> +     * There should be not page-tables left allocated by the time the
>>>>
>>>> Nit: s/not/no/ ?
>>>>
>>>>> +     * domain is destroyed. Note that arch_iommu_domain_destroy() is
>>>>> +     * called unconditionally, so pgtables may be unitialized.
>>>>> +     */
>>>>> +    ASSERT(dom_iommu(d)->platform_ops == NULL ||
>>>>> +           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
>>>>>  }
>>>>> 
>>>>>  static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>>>>> @@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
>>>>>      */
>>>>>      hd->platform_ops->clear_root_pgtable(d);
>>>>> 
>>>>> +    /* After this barrier no new page allocations can occur. */
>>>>> +    spin_barrier(&hd->arch.pgtables.lock);
>>>>
>>>> Didn't patch 2 utilize the call to ->clear_root_pgtable() itself as
>>>> the barrier? Why introduce another one (with a similar comment)
>>>> explicitly now?
>>>
>>> The barriers act differently, one will gate against any IOMMU page-tables
>>> modification. The other one will gate against allocation.
>>>
>>> There is no guarantee that the former will prevent the latter.
>>
>> Oh, right - different locks. I got confused here because in both
>> cases the goal is to prevent allocations.
>>
>>>>> @@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>>      unmap_domain_page(p);
>>>>> 
>>>>>      spin_lock(&hd->arch.pgtables.lock);
>>>>> -    page_list_add(pg, &hd->arch.pgtables.list);
>>>>> +    /*
>>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>>>> +     * reasons to continue to update the IOMMU page-tables while the
>>>>> +     * domain is dying.
>>>>> +     *
>>>>> +     * So prevent page-table allocation when the domain is dying.
>>>>> +     *
>>>>> +     * We relying on &hd->arch.pgtables.lock to synchronize d->is_dying.
>>>>> +     */
>>>>> +    if ( likely(!d->is_dying) )
>>>>> +    {
>>>>> +        alive = true;
>>>>> +        page_list_add(pg, &hd->arch.pgtables.list);
>>>>> +    }
>>>>>      spin_unlock(&hd->arch.pgtables.lock);
>>>>> 
>>>>> +    if ( unlikely(!alive) )
>>>>> +    {
>>>>> +        free_domheap_page(pg);
>>>>> +        pg = NULL;
>>>>> +    }
>>>>> +
>>>>>      return pg;
>>>>>  }
>>>>
>>>> As before I'm concerned of this forcing error paths to be taken
>>>> elsewhere, in case an allocation still happens (e.g. from unmap
>>>> once super page mappings are supported). Considering some of the
>>>> error handling in the IOMMU code is to invoke domain_crash(), it
>>>> would be quite unfortunate if we ended up crashing a domain
>>>> while it is being cleaned up after.
>>>
>>> It is unfortunate, but I think this is better than having to leak page
>>> tables.
>>>
>>>> Additionally, the (at present still hypothetical) unmap case, if
>>>> failing because of the change here, would then again chance to
>>>> leave mappings in place while the underlying pages get freed. As
>>>> this would likely require an XSA, the change doesn't feel like
>>>> "hardening" to me.
>>>
>>> I would agree with this if memory allocations could never fail. That's
>>> not that case and will become worse as we use IOMMU pool.
>>>
>>> Do you have callers in mind that doesn't check the returns of iommu_unmap()?
>>
>> The function is marked __must_check, so there won't be any direct
>> callers ignoring errors (albeit I may be wrong here - we used to
>> have cases where we simply suppressed the resulting compiler
>> diagnostic, without really handling errors; not sure if all of
>> these are gone by now). Risks might be elsewhere.
> 
> But this is not a new risk.
> So I don't understand why you think my patch
> is the one that may lead to an XSA in the future.

I didn't mean to imply it would _lead_ to an XSA (you're right
that the problem was there already before), but the term "harden"
suggests to me that the patch aims at eliminating possible
conditions. IOW the result here looks to me as if it would yield
a false sense of safety.

Jan