From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Will Deacon, Quentin Perret, Marc Zyngier, Sasha Levin
Subject: [PATCH 5.14 022/169] KVM: arm64: Fix host stage-2 PGD refcount
Date: Mon, 25 Oct 2021 21:13:23 +0200
Message-Id: <20211025191020.533737156@linuxfoundation.org>
In-Reply-To: <20211025191017.756020307@linuxfoundation.org>
References: <20211025191017.756020307@linuxfoundation.org>

From: Quentin Perret

[ Upstream commit 1d58a17ef54599506d44c45ac95be27273a4d2b1 ]

The KVM page-table library refcounts the pages of concatenated stage-2
PGDs individually. However, when running KVM in protected mode, the
host's stage-2 PGD is currently managed by EL2 as a single high-order
compound page, which can cause the refcount of the tail pages to reach
0 when they shouldn't, hence corrupting the page-table.

Fix this by introducing a new hyp_split_page() helper in the EL2 page
allocator (matching the kernel's split_page() function), and make use
of it from host_s2_zalloc_pages_exact().
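For illustration, the sketch below models the technique in a standalone
C program: a high-order block whose order and refcount live only in the
head page is split into order-0 pages that each carry their own
refcount, as the kernel's split_page() does for the buddy allocator.
The struct page_meta type, its fields, and set_page_refcounted() are
simplified stand-ins invented for this sketch, not the in-tree hyp
types.

	/*
	 * Minimal model of the split technique; page_meta and
	 * set_page_refcounted() are hypothetical stand-ins for
	 * struct hyp_page and hyp_set_page_refcounted().
	 */
	#include <assert.h>
	#include <stdio.h>

	struct page_meta {
		unsigned short order;    /* log2 of the block size in pages */
		unsigned short refcount;
	};

	/* Give a page its own, initialised refcount. */
	static void set_page_refcounted(struct page_meta *p)
	{
		assert(p->refcount == 0);
		p->refcount = 1;
	}

	/*
	 * Split a high-order block into (1 << order) independently
	 * refcounted order-0 pages. Only the head page was tracked
	 * before the split; afterwards every page is.
	 */
	static void split_page(struct page_meta *head)
	{
		unsigned short order = head->order;
		unsigned int i;

		head->order = 0;
		for (i = 1; i < (1u << order); i++) {
			struct page_meta *tail = head + i;

			tail->order = 0;
			set_page_refcounted(tail);
		}
	}

	int main(void)
	{
		/* An order-2 block: one head page and three tail pages. */
		struct page_meta block[4] = { { .order = 2, .refcount = 1 } };
		unsigned int i;

		split_page(&block[0]);
		for (i = 0; i < 4; i++)
			printf("page %u: order=%hu refcount=%hu\n",
			       i, block[i].order, block[i].refcount);
		return 0;
	}

After such a split, dropping a reference on any page of the former
block decrements that page's own, properly initialised refcount, which
is the property the per-page refcounting in the page-table library
relies on.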
Fixes: 1025c8c0c6ac ("KVM: arm64: Wrap the host with a stage 2")
Acked-by: Will Deacon
Suggested-by: Will Deacon
Signed-off-by: Quentin Perret
Signed-off-by: Marc Zyngier
Link: https://lore.kernel.org/r/20211005090155.734578-5-qperret@google.com
Signed-off-by: Sasha Levin
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 13 ++++++++++++-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 14 ++++++++++++++
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index fb0f523d1492..0a048dc06a7d 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -24,6 +24,7 @@ struct hyp_pool {

 /* Allocation */
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a6ce991b1467..b79ce0059e7b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -35,7 +35,18 @@ static const u8 pkvm_hyp_id = 1;

 static void *host_s2_zalloc_pages_exact(size_t size)
 {
-	return hyp_alloc_pages(&host_s2_pool, get_order(size));
+	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
+
+	hyp_split_page(hyp_virt_to_page(addr));
+
+	/*
+	 * The size of concatenated PGDs is always a power of two of
+	 * PAGE_SIZE, so there should be no need to free any of the tail
+	 * pages to make the allocation exact.
+	 */
+	WARN_ON(size != (PAGE_SIZE << get_order(size)));
+
+	return addr;
 }

 static void *host_s2_zalloc_page(void *pool)
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 41fc25bdfb34..a6e874e61a40 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -193,6 +193,20 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 	hyp_spin_unlock(&pool->lock);
 }

+void hyp_split_page(struct hyp_page *p)
+{
+	unsigned short order = p->order;
+	unsigned int i;
+
+	p->order = 0;
+	for (i = 1; i < (1 << order); i++) {
+		struct hyp_page *tail = p + i;
+
+		tail->order = 0;
+		hyp_set_page_refcounted(tail);
+	}
+}
+
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
 {
 	unsigned short i = order;
--
2.33.0
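As a worked example of the WARN_ON() in host_s2_zalloc_pages_exact()
(a sketch assuming 4 KiB pages; this get_order() is a simplified
stand-in with the same round-up behaviour as the kernel's): the warning
stays silent only when size is already a power-of-two number of pages,
so the allocation is exact and no tail page would need freeing.

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1ul << PAGE_SHIFT)

	/*
	 * Simplified stand-in for the kernel's get_order(): smallest
	 * order whose block covers "size".
	 */
	static unsigned int get_order(unsigned long size)
	{
		unsigned int order = 0;

		while ((PAGE_SIZE << order) < size)
			order++;
		return order;
	}

	int main(void)
	{
		/* 12288 is not a power-of-two multiple, so it would warn. */
		unsigned long sizes[] = { 4096, 16384, 65536, 12288 };
		unsigned int i;

		for (i = 0; i < 4; i++) {
			unsigned long size = sizes[i];
			unsigned int order = get_order(size);

			printf("size=%lu order=%u alloc=%lu WARN=%s\n",
			       size, order, PAGE_SIZE << order,
			       size != (PAGE_SIZE << order) ? "yes" : "no");
		}
		return 0;
	}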