From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kalesh Singh <kaleshsingh@google.com>
Date: Mon, 7 Mar 2022 10:49:03 -0800
Subject: [PATCH v5 5/8] KVM: arm64: Detect and handle hypervisor stack overflows
Message-Id: <20220307184935.1704614-6-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>
References: <20220307184935.1704614-1-kaleshsingh@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailer: git-send-email 2.35.1.616.g0bdcbb4464-goog
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
    surenb@google.com, kernel-team@android.com,
    Kalesh Singh <kaleshsingh@google.com>, James Morse, Alexandru Elisei,
    Suzuki K Poulose, Catalin Marinas, Mark Rutland, Mark Brown,
    Masami Hiramatsu, Peter Collingbourne, "Madhavan T. Venkataraman",
    Stephen Boyd, Andrew Scull, Ard Biesheuvel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org

The hypervisor stacks (for both nVHE Hyp mode and nVHE protected mode)
are aligned such that any valid stack address has its PAGE_SHIFT bit set
to 1. This allows us to conveniently check for overflow in the exception
entry without corrupting any GPRs. We won't recover from a stack
overflow, so panic the hypervisor.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---

Changes in v5:
  - Valid stack addresses now have PAGE_SHIFT bit as 1 instead of 0

Changes in v3:
  - Remove test_sp_overflow macro, per Mark
  - Add asmlinkage attribute for hyp_panic, hyp_panic_bad_stack, per Ard

 arch/arm64/kvm/hyp/nvhe/host.S   | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c |  7 ++++++-
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 3d613e721a75..be6d844279b1 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -153,6 +153,18 @@ SYM_FUNC_END(__host_hvc)
 
 .macro invalid_host_el2_vect
 	.align 7
+
+	/*
+	 * Test whether the SP has overflowed, without corrupting a GPR.
+	 * nVHE hypervisor stacks are aligned so that the PAGE_SHIFT bit
+	 * of SP should always be 1.
+	 */
+	add	sp, sp, x0			// sp' = sp + x0
+	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
+	tbz	x0, #PAGE_SHIFT, .L__hyp_sp_overflow\@
+	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
+	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
+
 	/* If a guest is loaded, panic out of it. */
 	stp	x0, x1, [sp, #-16]!
 	get_loaded_vcpu x0, x1
@@ -165,6 +177,18 @@ SYM_FUNC_END(__host_hvc)
 	 * been partially clobbered by __host_enter.
 	 */
 	b	hyp_panic
+
+.L__hyp_sp_overflow\@:
+	/*
+	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
+	 * This corrupts the stack but is ok, since we won't be attempting
+	 * any unwinding here.
+	 */
+	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
+	mov	sp, x0
+
+	bl	hyp_panic_bad_stack
+	ASM_BUG()
 .endm
 
 .macro invalid_host_el1_vect
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6410d21d8695..703a5d3f611b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -347,7 +347,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_code;
 }
 
-void __noreturn hyp_panic(void)
+asmlinkage void __noreturn hyp_panic(void)
 {
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
@@ -369,6 +369,11 @@ void __noreturn hyp_panic(void)
 	unreachable();
 }
 
+asmlinkage void __noreturn hyp_panic_bad_stack(void)
+{
+	hyp_panic();
+}
+
 asmlinkage void kvm_unexpected_el2_exception(void)
 {
 	return __kvm_unexpected_el2_exception();
-- 
2.35.1.616.g0bdcbb4464-goog
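
For readers who want to check the bit trick in the entry code above outside
of assembly, here is a small, self-contained C sketch of the same test. It
is illustrative only and not part of the patch: the PAGE_SHIFT value, the
example addresses, and the helper name are all assumptions made up for the
demo (4K pages). The idea is that every address inside the hyp stack page
has bit PAGE_SHIFT set, while the guard page immediately below it has that
bit clear, so an overflowed SP fails the test.

/*
 * Stand-alone illustration only -- NOT part of the patch.
 * EXAMPLE_PAGE_SHIFT and the addresses below are assumptions for the demo.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SHIFT	12
#define EXAMPLE_PAGE_SIZE	(1UL << EXAMPLE_PAGE_SHIFT)

/*
 * The hyp stack page is mapped so that bit PAGE_SHIFT of every address
 * inside it is 1, while the guard page directly below has that bit clear.
 * Overflowing the stack therefore clears the bit, which the entry code
 * detects with a single tbz on a recovered copy of SP.
 */
static bool hyp_sp_overflowed(uint64_t sp)
{
	return !(sp & EXAMPLE_PAGE_SIZE);
}

int main(void)
{
	uint64_t stack_page = 0xf000;	/* bit 12 set:   0xf000..0xffff */
	uint64_t guard_page = 0xe000;	/* bit 12 clear: 0xe000..0xefff */

	uint64_t valid_sp      = stack_page + 0xff0;	/* inside the stack page  */
	uint64_t overflowed_sp = guard_page + 0xff0;	/* spilled into the guard */

	printf("valid sp overflowed?      %d\n", hyp_sp_overflowed(valid_sp));	/* 0 */
	printf("overflowed sp overflowed? %d\n", hyp_sp_overflowed(overflowed_sp));	/* 1 */
	return 0;
}

The add/sub dance in the patch recovers the original SP into x0 before the
tbz precisely so that this test can run without pushing anything onto a
possibly overflowed stack.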