From mboxrd@z Thu Jan 1 00:00:00 1970
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
	James Morse, Oliver Upton, Zenghui Yu,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
	linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Steven Price
Subject: [PATCH v2 02/43] kvm: arm64: pgtable: Track the number of pages in the entry level
Date: Fri, 12 Apr 2024 09:42:28 +0100
Message-Id: <20240412084309.1733783-3-steven.price@arm.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240412084309.1733783-1-steven.price@arm.com>
References: <20240412084056.1733704-1-steven.price@arm.com>
 <20240412084309.1733783-1-steven.price@arm.com>
X-Mailing-List: linux-coco@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Suzuki K Poulose

Keep track of the number of pages allocated for the top-level PGD,
rather than computing it every time (even though it is currently only
needed twice). This will be used later by the Arm CCA KVM changes.

Signed-off-by: Suzuki K Poulose
Signed-off-by: Steven Price
---
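For reference, a minimal standalone sketch of the value being cached by
this patch: the number of entry-level ("PGD") pages is a pure function
of the input-address size and the start level, so it can be computed
once in __kvm_pgtable_stage2_init() and reused unchanged in
kvm_pgtable_stage2_destroy(). The granule size, helper names and
arithmetic below are illustrative assumptions for a 4K granule only;
this is not the kernel's kvm_pgd_pages() implementation.

/*
 * Illustrative sketch, not kernel code: how many pages must be
 * concatenated at the entry level to map ia_bits of input address.
 * Assumes a 4K granule and 8-byte descriptors.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12			/* 4K pages (assumption) */
#define LEVEL_BITS	(PAGE_SHIFT - 3)	/* IA bits resolved per level */

/* IA bits covered by a single table page rooted at 'start_level' (0..3). */
static uint32_t bits_per_entry_page(int start_level)
{
	return PAGE_SHIFT + (4 - start_level) * LEVEL_BITS;
}

/* Pages concatenated at the entry level to cover 'ia_bits'. */
static uint8_t entry_level_pages(uint32_t ia_bits, int start_level)
{
	uint32_t covered = bits_per_entry_page(start_level);

	return ia_bits > covered ? 1u << (ia_bits - covered) : 1;
}

int main(void)
{
	/* e.g. a 40-bit IPA space starting at level 1 needs two pages. */
	printf("pgd_pages = %u\n", entry_level_pages(40, 1));
	return 0;
}

With the value cached in pgt->pgd_pages, kvm_pgtable_stage2_destroy()
can size the free without going back to ia_bits and start_level.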
 arch/arm64/include/asm/kvm_pgtable.h | 2 ++
 arch/arm64/kvm/hyp/pgtable.c         | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 19278dfe7978..0350c08ada7a 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -362,6 +362,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  * struct kvm_pgtable - KVM page-table.
  * @ia_bits:		Maximum input address size, in bits.
  * @start_level:	Level at which the page-table walk starts.
+ * @pgd_pages:		Number of pages in the entry level of the page-table.
  * @pgd:		Pointer to the first top-level entry of the page-table.
  * @mm_ops:		Memory management callbacks.
  * @mmu:		Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
@@ -372,6 +373,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
 struct kvm_pgtable {
 	u32					ia_bits;
 	s8					start_level;
+	u8					pgd_pages;
 	kvm_pteref_t				pgd;
 	struct kvm_pgtable_mm_ops		*mm_ops;
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3fae5830f8d2..9decff9736ac 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1552,7 +1552,8 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 	u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
 	s8 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
 
-	pgd_sz = kvm_pgd_pages(ia_bits, start_level) * PAGE_SIZE;
+	pgt->pgd_pages = kvm_pgd_pages(ia_bits, start_level);
+	pgd_sz = pgt->pgd_pages * PAGE_SIZE;
 	pgt->pgd = (kvm_pteref_t)mm_ops->zalloc_pages_exact(pgd_sz);
 	if (!pgt->pgd)
 		return -ENOMEM;
@@ -1604,7 +1605,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
 	};
 
 	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-	pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level) * PAGE_SIZE;
+	pgd_sz = pgt->pgd_pages * PAGE_SIZE;
 	pgt->mm_ops->free_pages_exact(kvm_dereference_pteref(&walker, pgt->pgd), pgd_sz);
 	pgt->pgd = NULL;
 }
-- 
2.34.1