Subject: Re: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e
From: Hongyan Xia
To: Jan Beulich
Cc: Stefano Stabellini, julien@xen.org, Wei Liu, Andrew Cooper,
 Ian Jackson, George Dunlap, xen-devel@lists.xenproject.org,
 Roger Pau Monné
Date: Mon, 27 Jul 2020 10:09:33 +0100
In-Reply-To: <826d5a28-c391-dd30-d588-6f730b454c18@suse.com>
References: <826d5a28-c391-dd30-d588-6f730b454c18@suse.com>

On Tue, 2020-07-14 at 12:47 +0200, Jan Beulich wrote:
> On 29.05.2020 13:11, Hongyan Xia wrote:
> > From: Wei Liu
> >
> > Rewrite those functions to
> > use the new APIs. Modify its callers to unmap
> > the pointer returned. Since alloc_xen_pagetable_new() is almost never
> > useful unless accompanied by page clearing and a mapping, introduce a
> > helper alloc_map_clear_xen_pt() for this sequence.
> >
> > Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn() to
> > unmap the page, which requires domain_page.h header in vmap.
> >
> > Signed-off-by: Wei Liu
> > Signed-off-by: Hongyan Xia
>
> Reviewed-by: Jan Beulich
> with two further small adjustments:
>
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -4948,8 +4948,28 @@ void free_xen_pagetable_new(mfn_t mfn)
> >          free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
> >  }
> >
> > +void *alloc_map_clear_xen_pt(mfn_t *pmfn)
> > +{
> > +    mfn_t mfn = alloc_xen_pagetable_new();
> > +    void *ret;
> > +
> > +    if ( mfn_eq(mfn, INVALID_MFN) )
> > +        return NULL;
> > +
> > +    if ( pmfn )
> > +        *pmfn = mfn;
> > +    ret = map_domain_page(mfn);
> > +    clear_page(ret);
> > +
> > +    return ret;
> > +}
> > +
> >  static DEFINE_SPINLOCK(map_pgdir_lock);
> >
> > +/*
> > + * For virt_to_xen_lXe() functions, they take a virtual address and return a
> > + * pointer to Xen's LX entry. Caller needs to unmap the pointer.
> > + */
> >  static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
>
> May I suggest s/virtual/linear/ to at least make the new comment
> correct?
>
> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
> >  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
> >  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
> >  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> > -#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> > +
> > +#define vmap_to_mfn(va)                                                   \
> > +    ({                                                                    \
> > +        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va)); \
> > +        mfn_t mfn_ = l1e_get_mfn(*pl1e_);                                 \
> > +        unmap_domain_page(pl1e_);                                         \
> > +        mfn_;                                                             \
> > +    })
>
> Just like is already the case in domain_page_map_to_mfn() I think
> you want to add "BUG_ON(!pl1e)" here to limit the impact of any
> problem to DoS (rather than a possible privilege escalation).
>
> Or actually, considering the only case where virt_to_xen_l1e()
> would return NULL, returning INVALID_MFN here would likely be
> even more robust. There looks to be just a single caller, which
> would need adjusting to cope with an error coming back. In fact -
> it already ASSERT()s, despite NULL right now never coming back
> from vmap_to_page(). I think the loop there would better be
>
>     for ( i = 0; i < pages; i++ )
>     {
>         struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);
>
>         if ( page )
>             page_list_add(page, &pg_list);
>         else
>             printk_once(...);
>     }
>
> Thoughts?

To be honest, I think the current implementation of vmap_to_mfn() is
just incorrect. There is simply no guarantee that a vmap is mapped with
small pages, so IMO we just cannot do virt_to_xen_l1e() here. The
correct way is to have a generic page table walking function which
walks from the base and can stop at any level, properly returning a
code to indicate the level or any error.

I am inclined to BUG_ON() here, and to upstream a proper fix for
vmap_to_mfn() later as an individual patch. Am I missing anything here?

Hongyan
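P.S. To make the "generic walker" idea above concrete, here is a minimal,
self-contained toy model in plain C. This is not Xen code: all names
(pt_walk, FLAG_PSE, pt_node, etc.) and the 3-level/8-entry shape are
hypothetical, purely to illustrate a walk that descends from the root,
can stop at any level (e.g. at a superpage), and reports the level or an
error instead of assuming an L1 entry always exists:

```c
/*
 * Toy model of a level-aware page-table walk (illustrative only, not
 * Xen code). A real implementation would decode hardware entries; here
 * each node just records, per slot, whether the entry is present and
 * whether it is a leaf ("superpage") at a non-terminal level.
 */
#include <stdint.h>

#define ENTRIES      8
#define FLAG_PRESENT 0x1u
#define FLAG_PSE     0x2u   /* leaf at a non-terminal level (superpage) */

typedef struct pt_node {
    struct pt_node *next[ENTRIES]; /* lower-level table, unless FLAG_PSE */
    uint64_t frame[ENTRIES];       /* frame number when this entry is a leaf */
    unsigned int flags[ENTRIES];
} pt_node;

/*
 * Walk from the root using one index per level. On success, *frame
 * holds the mapped frame and the return value is the level of the leaf
 * (1 = smallest page, 2/3 = superpage). Returns -1 on a non-present
 * entry, leaving the policy (BUG_ON, INVALID_MFN, ...) to the caller.
 */
static int pt_walk(pt_node *root, const unsigned int idx[3], uint64_t *frame)
{
    pt_node *node = root;

    for ( int level = 3; level >= 1; level-- )
    {
        unsigned int i = idx[3 - level];
        unsigned int flags = node->flags[i];

        if ( !(flags & FLAG_PRESENT) )
            return -1;                  /* hole in the mapping */

        if ( level == 1 || (flags & FLAG_PSE) )
        {
            *frame = node->frame[i];    /* leaf found at this level */
            return level;
        }
        node = node->next[i];           /* descend one level */
    }
    return -1; /* not reached: level 1 always returns above */
}
```

With something along these lines, vmap_to_mfn() could return the frame
at whatever level the mapping actually lives, rather than dereferencing
a possibly-absent L1 entry.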