From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 8 Apr 2022 13:03:27 -0700
In-Reply-To: <20220408200349.1529080-1-kaleshsingh@google.com>
Message-Id: <20220408200349.1529080-5-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220408200349.1529080-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.35.1.1178.g4f1659d476-goog
Subject: [PATCH v7 4/6] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
    surenb@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Andrew Walbran,
    Mark Rutland, Ard Biesheuvel, Masahiro Yamada, Nathan Chancellor,
    Changbin Du, Nick Desaulniers, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Map the stack pages in the flexible private VA range and allocate guard
pages below the stack as unbacked VA space. The stack is aligned so that
any valid stack address has the PAGE_SHIFT bit set to 1 - this is used
for overflow detection (implemented in a subsequent patch in the series).

Signed-off-by: Kalesh Singh
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
---
Changes in v7:
  - Add Fuad's Reviewed-by and Tested-by tags.

Changes in v6:
  - Update call to pkvm_alloc_private_va_range() (return val and params)

Changes in v5:
  - Use a single allocation for stack and guard pages to ensure they
    are contiguous, per Marc

Changes in v4:
  - Replace IS_ERR_OR_NULL check with IS_ERR check now that
    pkvm_alloc_private_va_range() returns an error for null pointer,
    per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
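
Note: the overflow check itself is not part of this patch; it is added
later in the series. As a rough illustration of the scheme described in
the commit message (the helper name below is hypothetical, not the code
from the later patch):

	/*
	 * Sketch only. Resulting private VA layout for each CPU:
	 *
	 *   hyp_addr + 2 * PAGE_SIZE -> params->stack_hyp_va (initial stack pointer)
	 *   hyp_addr + PAGE_SIZE     -> stack page, mapped to params->stack_pa
	 *   hyp_addr                 -> guard page, left unbacked
	 *
	 * Because the allocation is aligned to twice the page size, any
	 * address within the stack page has the PAGE_SHIFT bit set, while
	 * an address that has overflowed into the guard page has it clear.
	 */
	static inline bool hyp_stack_overflowed(unsigned long sp)
	{
		/* PAGE_SIZE == (1UL << PAGE_SHIFT), so this tests exactly that bit. */
		return !(sp & PAGE_SIZE);
	}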

 arch/arm64/kvm/hyp/nvhe/setup.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 27af337f9fea..e8d4ea2fcfa0 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -99,17 +99,42 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 		return ret;
 
 	for (i = 0; i < hyp_nr_cpus; i++) {
+		struct kvm_nvhe_init_params *params = per_cpu_ptr(&kvm_init_params, i);
+		unsigned long hyp_addr;
+
 		start = (void *)kern_hyp_va(per_cpu_base[i]);
 		end = start + PAGE_ALIGN(hyp_percpu_size);
 		ret = pkvm_create_mappings(start, end, PAGE_HYP);
 		if (ret)
 			return ret;
 
-		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
-		start = end - PAGE_SIZE;
-		ret = pkvm_create_mappings(start, end, PAGE_HYP);
+		/*
+		 * Allocate a contiguous HYP private VA range for the stack
+		 * and guard page. The allocation is also aligned based on
+		 * the order of its size.
+		 */
+		ret = pkvm_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
+		if (ret)
+			return ret;
+
+		/*
+		 * Since the stack grows downwards, map the stack to the page
+		 * at the higher address and leave the lower guard page
+		 * unbacked.
+		 *
+		 * Any valid stack address now has the PAGE_SHIFT bit as 1
+		 * and addresses corresponding to the guard page have the
+		 * PAGE_SHIFT bit as 0 - this is used for overflow detection.
+		 */
+		hyp_spin_lock(&pkvm_pgd_lock);
+		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
+					  PAGE_SIZE, params->stack_pa, PAGE_HYP);
+		hyp_spin_unlock(&pkvm_pgd_lock);
 		if (ret)
 			return ret;
+
+		/* Update stack_hyp_va to end of the stack's private VA range */
+		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
 	}
 
 	/*
-- 
2.35.1.1178.g4f1659d476-goog