X-Mailing-List: linux-coco@lists.linux.dev
In-Reply-To: <073c5a97-272c-c5a0-19f2-c3f14f916c72@intel.com>
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com> <20220614120231.48165-11-kirill.shutemov@linux.intel.com> <80cc204b-a24f-684f-ec66-1361b69cae39@intel.com> <073c5a97-272c-c5a0-19f2-c3f14f916c72@intel.com>
Date: Sat, 13 Aug 2022 09:04:58 -0700
From: "Andy Lutomirski"
To: "Dave Hansen", "Borislav Petkov", "Kirill A. Shutemov"
Cc: "Sean Christopherson", "Andrew Morton", "Joerg Roedel", "Ard Biesheuvel", "Andi Kleen", "Sathyanarayanan Kuppuswamy", "David Rientjes", "Vlastimil Babka", "Tom Lendacky", "Thomas Gleixner", "Peter Zijlstra (Intel)", "Paolo Bonzini", "Ingo Molnar", "Varad Gautam", "Dario Faggioli", "Mike Rapoport", "David Hildenbrand", "Marcelo Henrique Cerri", tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, "the arch/x86 maintainers", linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, "Linux Kernel Mailing List"
Subject: Re: [PATCHv7 10/14] x86/mm: Avoid load_unaligned_zeropad() stepping into unaccepted memory

On Wed, Aug 3, 2022, at 7:02 AM, Dave Hansen wrote:
> On 8/2/22 16:46, Dave Hansen wrote:
>> To sum it all up, I'm not happy with the complexity of the page
>> acceptance code either, but I'm not sure that it's a bad tradeoff
>> compared to greater #VE complexity or fragility.
>>
>> Does anyone think we should go back and really reconsider this?
>
> One other thing I remembered as I re-read my write-up on this.
>
> In the "new" mode, guests never get #VEs for unaccepted memory. They
> just exit to the host and can never be reentered. They must be killed.
>
> In the "old" mode, I _believe_ that the guest always gets a #VE for
> non-EPT-present memory. The #VE is basically the same no matter if the
> page is unaccepted or if the host goes out and makes a
> previously-accepted page non-present.
>
> One really nasty implication of this "old" mode is that the host can
> remove *accepted* pages that are used in the syscall gap. That means
> that the #VE handler would need to be of the paranoid variety, which
> opens up all kinds of other fun.
>
> * "Old" - #VEs can happen in the syscall gap
> * "New" - #VEs happen at better-defined times. Unexpected ones are
>   fatal.
>
> There's a third option which I proposed but doesn't yet exist. The TDX
> module _could_ separate the behavior of unaccepted-memory #VEs and
> host-induced #VEs. This way, we could use load_unaligned_zeropad() with
> impunity and handle it in the #VE handler. At the same time, the host
> would not be allowed to remove accepted memory and cause problems in the
> syscall gap. Kinda the best of both worlds.
>
> But, I'm not sure how valuable that would be now that we have the
> (admittedly squirrelly) code to avoid load_unaligned_zeropad() #VEs.

How would that be implemented? It would need to track which GPAs *were* accepted across a host-induced unmap/remap cycle. This would involve preventing the host from ever completely removing a secure EPT table without the guest's help, right?

Admittedly this would IMO be better behavior. Is it practical to implement?