From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kalesh Singh
Date: Wed, 20 Apr 2022 14:51:12 -0700
Subject: Re: [PATCH v7 5/6] KVM: arm64: Detect and handle hypervisor stack overflows
To: Marc Zyngier
Cc: Will Deacon, Quentin Perret, Fuad Tabba, Suren Baghdasaryan,
 "Cc: Android Kernel", James Morse, Alexandru Elisei, Suzuki K Poulose,
 Catalin Marinas, Andrew Walbran, Mark Rutland, Ard Biesheuvel, Zenghui Yu,
 Andrew Jones, Nathan Chancellor, Masahiro Yamada,
 "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)", kvmarm, LKML
References: <20220408200349.1529080-1-kaleshsingh@google.com>
 <20220408200349.1529080-6-kaleshsingh@google.com>
 <87v8v6aek0.wl-maz@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Apr 18, 2022 at 7:41 PM Kalesh Singh wrote:
>
> On Mon, Apr 18, 2022 at 3:09 AM Marc Zyngier wrote:
> >
> > On Fri, 08 Apr 2022 21:03:28 +0100,
> > Kalesh Singh wrote:
> > >
> > > The hypervisor stacks (for both nVHE Hyp mode and nVHE protected mode)
> > > are aligned such that any valid stack address has PAGE_SHIFT bit as 1.
> > > This allows us to conveniently check for overflow in the exception entry
> > > without corrupting any GPRs. We won't recover from a stack overflow so
> > > panic the hypervisor.
> > >
> > > Signed-off-by: Kalesh Singh
> > > Tested-by: Fuad Tabba
> > > Reviewed-by: Fuad Tabba
> > > ---
> > >
> > > Changes in v7:
> > >   - Add Fuad's Reviewed-by and Tested-by tags.
> > >
> > > Changes in v5:
> > >   - Valid stack addresses now have PAGE_SHIFT bit as 1 instead of 0
> > >
> > > Changes in v3:
> > >   - Remove test_sp_overflow macro, per Mark
> > >   - Add asmlinkage attribute for hyp_panic, hyp_panic_bad_stack, per Ard
> > >
> > >
> > >  arch/arm64/kvm/hyp/nvhe/host.S   | 24 ++++++++++++++++++++++++
> > >  arch/arm64/kvm/hyp/nvhe/switch.c |  7 ++++++-
> > >  2 files changed, 30 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> > > index 3d613e721a75..be6d844279b1 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/host.S
> > > +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> > > @@ -153,6 +153,18 @@ SYM_FUNC_END(__host_hvc)
> > >
> > >  .macro invalid_host_el2_vect
> > >  	.align 7
> > > +
> > > +	/*
> > > +	 * Test whether the SP has overflowed, without corrupting a GPR.
> > > +	 * nVHE hypervisor stacks are aligned so that the PAGE_SHIFT bit
> > > +	 * of SP should always be 1.
> > > +	 */
> > > +	add	sp, sp, x0		// sp' = sp + x0
> > > +	sub	x0, sp, x0		// x0' = sp' - x0 = (sp + x0) - x0 = sp
> > > +	tbz	x0, #PAGE_SHIFT, .L__hyp_sp_overflow\@
> > > +	sub	x0, sp, x0		// x0'' = sp' - x0' = (sp + x0) - sp = x0
> > > +	sub	sp, sp, x0		// sp'' = sp' - x0 = (sp + x0) - x0 = sp
> > > +
> > >  	/* If a guest is loaded, panic out of it. */
> > >  	stp	x0, x1, [sp, #-16]!
> > >  	get_loaded_vcpu x0, x1
> > > @@ -165,6 +177,18 @@ SYM_FUNC_END(__host_hvc)
> > >  	 * been partially clobbered by __host_enter.
> > >  	 */
> > >  	b	hyp_panic
> > > +
> > > +.L__hyp_sp_overflow\@:
> > > +	/*
> > > +	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
> > > +	 * This corrupts the stack but is ok, since we won't be attempting
> > > +	 * any unwinding here.
> > > +	 */
> > > +	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
> > > +	mov	sp, x0
> > > +
> > > +	bl	hyp_panic_bad_stack
> >
> > Why bl? You clearly don't expect to return here, given that you have
> > an ASM_BUG() right below, and that you are calling a __no_return
> > function. I think we should be consistent with the rest of the code
> > and just do a simple branch.
>
> The idea was to use bl to give the hyp_panic_bad_stack() frame in the
> stack trace, which makes it easy to identify overflows. I can add a
> comment and drop the redundant ASM_BUG()

Sorry, my mistake here: bl will give us the current frame in the stack
trace (hyp_host_vector), so it doesn't affect hyp_panic_bad_stack (the
next frame) being in the stack trace.

Addressed in v8:
https://lore.kernel.org/r/20220420214317.3303360-6-kaleshsingh@google.com/

Thanks,
Kalesh

>
> Thanks,
> Kalesh
>
> >
> > It also gives us a chance to preserve an extra register from the
> > context.
> >
> > > +	ASM_BUG()
> > >  .endm
> > >
> > >  .macro invalid_host_el1_vect
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > > index 6410d21d8695..703a5d3f611b 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > > @@ -347,7 +347,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> > >  	return exit_code;
> > >  }
> > >
> > > -void __noreturn hyp_panic(void)
> > > +asmlinkage void __noreturn hyp_panic(void)
> > >  {
> > >  	u64 spsr = read_sysreg_el2(SYS_SPSR);
> > >  	u64 elr = read_sysreg_el2(SYS_ELR);
> > > @@ -369,6 +369,11 @@ void __noreturn hyp_panic(void)
> > >  	unreachable();
> > >  }
> > >
> > > +asmlinkage void __noreturn hyp_panic_bad_stack(void)
> > > +{
> > > +	hyp_panic();
> > > +}
> > > +
> > >  asmlinkage void kvm_unexpected_el2_exception(void)
> > >  {
> > >  	return __kvm_unexpected_el2_exception();
> > > --
> > > 2.35.1.1178.g4f1659d476-goog
> > >
> > >
> >
> > Thanks,
> >
> > M.
> >
> > --
> > Without deviation from the norm, progress is not possible.
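
---

A note for readers on why the single-bit test above is sufficient. The
stack layout the series sets up can be modeled in a few lines of plain C.
This is a user-space sketch, not kernel code: it assumes 4K pages
(PAGE_SHIFT = 12) and the layout described in the series, where a private
VA range of two pages is reserved, the lower page is left unmapped as a
guard, and the upper page backs the stack. Every address inside the stack
page then has bit PAGE_SHIFT set, and an overflow into the guard page
clears it:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                   /* assumed: 4K pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* The test the entry code performs: bit PAGE_SHIFT clear => overflow. */
static int sp_overflowed(uint64_t sp)
{
	return !(sp & (1UL << PAGE_SHIFT));
}

int main(void)
{
	uint64_t base = 0x40000000UL;             /* aligned to 2 * PAGE_SIZE */
	uint64_t stack_lo = base + PAGE_SIZE;     /* guard page sits below    */
	uint64_t stack_hi = base + 2 * PAGE_SIZE; /* initial stack top        */

	printf("mid-stack:  %d\n", sp_overflowed(stack_hi - 16)); /* 0 */
	printf("low end:    %d\n", sp_overflowed(stack_lo));      /* 0 */
	printf("into guard: %d\n", sp_overflowed(stack_lo - 16)); /* 1 */
	return 0;
}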
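The add/sub dance before the tbz recovers the original SP into x0 and
later restores both registers, all without spilling to memory or
clobbering a second GPR. The same arithmetic, checked in C (sp and x0 are
stand-ins for the registers, the values are arbitrary, and unsigned
wraparound keeps the identity exact):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint64_t sp = 0x40001f80UL, x0 = 0xdeadbeefUL;
	const uint64_t orig_sp = sp, orig_x0 = x0;

	sp = sp + x0;  /* add sp, sp, x0 : sp'  = sp + x0                */
	x0 = sp - x0;  /* sub x0, sp, x0 : x0'  = sp' - x0  = original sp */
	/* here the real code does: tbz x0, #PAGE_SHIFT, overflow        */
	x0 = sp - x0;  /* sub x0, sp, x0 : x0'' = sp' - x0' = original x0 */
	sp = sp - x0;  /* sub sp, sp, x0 : sp'' = sp' - x0  = original sp */

	assert(sp == orig_sp && x0 == orig_x0);
	return 0;
}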
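On the bl-versus-b question settled above: hyp_panic_bad_stack() is a
separate symbol purely so that overflow panics are recognizable in a
backtrace. A rough user-space analogue of that pattern, with hypothetical
names (my_panic, my_panic_bad_stack) standing in for the kernel's:

#include <stdio.h>
#include <stdlib.h>

__attribute__((noreturn)) static void my_panic(const char *why)
{
	fprintf(stderr, "panic: %s\n", why);
	abort();
}

/*
 * Distinct noreturn wrapper: crashing through a separately named
 * function means a backtrace through this path reads "stack overflow"
 * rather than a generic panic.
 */
__attribute__((noreturn)) static void my_panic_bad_stack(void)
{
	my_panic("stack overflow");
}

int main(void)
{
	my_panic_bad_stack();
}

Whether the caller reaches the wrapper with b or bl, the distinct symbol
is what tags the crash; bl additionally leaves a return address that
keeps the calling frame visible to the unwinder, which is the point
Kalesh corrects himself on above.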