From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5750eead-44f9-260f-283d-4902b5363faf@arm.com>
Date: Wed, 1 Mar 2023 11:55:37 +0000
Subject: Re: [RFC PATCH 08/28] arm64: RME: Keep a spare page delegated to the RMM
From: Steven Price
To: Zhi Wang
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Catalin Marinas,
 Marc Zyngier, Will Deacon, James Morse, Oliver Upton,
 Suzuki K Poulose, Zenghui Yu, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei,
 Christoffer Dall, Fuad Tabba, linux-coco@lists.linux.dev
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>
 <20230127112932.38045-9-steven.price@arm.com>
 <20230213184701.00005d3b@gmail.com>
In-Reply-To: <20230213184701.00005d3b@gmail.com>

On 13/02/2023 16:47, Zhi Wang wrote:
> On Fri, 27 Jan 2023 11:29:12 +0000
> Steven Price wrote:
>
>> Pages can only be populated/destroyed on the RMM at the 4KB granule;
>> this requires creating the full depth of RTTs. However, if the pages are
>> going to be combined into a 4MB huge page the last RTT is only
>> temporarily needed. Similarly, when freeing memory the huge page must be
>> temporarily split, requiring temporary usage of the full depth of RTTs.
>>
>> To avoid needing to perform a temporary allocation and delegation of a
>> page for this purpose, we keep a spare delegated page around. In
>> particular this avoids the need for memory allocation while destroying
>> the realm guest.
>>
>> Signed-off-by: Steven Price
>> ---
>>  arch/arm64/include/asm/kvm_rme.h | 3 +++
>>  arch/arm64/kvm/rme.c             | 6 ++++++
>>  2 files changed, 9 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
>> index 055a22accc08..a6318af3ed11 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -21,6 +21,9 @@ struct realm {
>>  	void *rd;
>>  	struct realm_params *params;
>>  
>> +	/* A spare already delegated page */
>> +	phys_addr_t spare_page;
>> +
>>  	unsigned long num_aux;
>>  	unsigned int vmid;
>>  	unsigned int ia_bits;
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index 9f8c5a91b8fc..0c9d70e4d9e6 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -148,6 +148,7 @@ static int realm_create_rd(struct kvm *kvm)
>>  	}
>>  
>>  	realm->rd = rd;
>> +	realm->spare_page = PHYS_ADDR_MAX;
>>  	realm->ia_bits = VTCR_EL2_IPA(kvm->arch.vtcr);
>>  
>>  	if (WARN_ON(rmi_rec_aux_count(rd_phys, &realm->num_aux))) {
>> @@ -357,6 +358,11 @@ void kvm_destroy_realm(struct kvm *kvm)
>>  		free_page((unsigned long)realm->rd);
>>  		realm->rd = NULL;
>>  	}
>> +	if (realm->spare_page != PHYS_ADDR_MAX) {
>> +		if (!WARN_ON(rmi_granule_undelegate(realm->spare_page)))
>> +			free_page((unsigned long)phys_to_virt(realm->spare_page));
>
> Will the page be leaked (not usable for host and realms) if the
> undelegate failed? If yes, better at least put a comment.

Yes - I'll add a comment. In general, being unable to undelegate a page
points to a programming error in the host. The only reason the RMM
should refuse the request is if the page is in use by a Realm which the
host has configured. So the WARN() is correct (there's a kernel bug) and
the only sensible course of action is to leak the page and limp on.
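Something along these lines - an untested sketch of the commented
teardown path, using only the calls already in the hunk above:

	if (realm->spare_page != PHYS_ADDR_MAX) {
		/*
		 * The RMM should only refuse to undelegate a granule if
		 * it believes the page is still in use by a Realm, which
		 * indicates a host (kernel) bug. The page cannot safely
		 * be returned to the allocator in that state, so
		 * deliberately leak it and limp on.
		 */
		if (!WARN_ON(rmi_granule_undelegate(realm->spare_page)))
			free_page((unsigned long)phys_to_virt(realm->spare_page));
		realm->spare_page = PHYS_ADDR_MAX;
	}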
Thanks,

Steve

>> +		realm->spare_page = PHYS_ADDR_MAX;
>> +	}
>>  
>>  	pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level);
>>  	for (i = 0; i < pgd_sz; i++) {
>