From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kalesh Singh
Date: Tue, 22 Feb 2022 12:30:11 -0800
Subject: Re: [PATCH v2 4/9] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack
To: Mark Rutland
Cc: Will Deacon, Marc Zyngier, Quentin Perret, Fuad Tabba,
 Suren Baghdasaryan, "Cc: Android Kernel", Catalin Marinas, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Ard Biesheuvel, Pasha Tatashin,
 Joey Gouly, Peter Collingbourne, Andrew Scull,
 "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)", LKML,
 kvmarm@lists.cs.columbia.edu
References: <20220222165212.2005066-1-kaleshsingh@google.com>
 <20220222165212.2005066-5-kaleshsingh@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 22, 2022 at 10:55 AM Mark Rutland wrote:
>
> On Tue, Feb 22, 2022 at 08:51:05AM -0800, Kalesh Singh wrote:
> > Maps the stack pages in the flexible private VA range and allocates
> > guard pages below the stack as unbacked VA space. The stack is aligned
> > to twice its size to aid overflow detection (implemented in a subsequent
> > patch in the series).
> >
> > Signed-off-by: Kalesh Singh
> > ---
> >  arch/arm64/kvm/hyp/nvhe/setup.c | 25 +++++++++++++++++++++----
> >  1 file changed, 21 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > index 27af337f9fea..69df21320b09 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > @@ -105,11 +105,28 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
> >  		if (ret)
> >  			return ret;
> >
> > -		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
> > +		/*
> > +		 * Private mappings are allocated upwards from __io_map_base
> > +		 * so allocate the guard page first then the stack.
> > +		 */
> > +		start = (void *)pkvm_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
> > +		if (IS_ERR_OR_NULL(start))
> > +			return PTR_ERR(start);
>
> As on a prior patch, this usage of PTR_ERR() pattern is wrong when the
> ptr is NULL.

Ack. I'll fix these in the next version.

Thanks,
Kalesh

> > +		/*
> > +		 * The stack is aligned to twice its size to facilitate overflow
> > +		 * detection.
> > +		 */
> > +		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_pa;
> >  		start = end - PAGE_SIZE;
> > -		ret = pkvm_create_mappings(start, end, PAGE_HYP);
> > -		if (ret)
> > -			return ret;
> > +		start = (void *)__pkvm_create_private_mapping((phys_addr_t)start,
> > +					PAGE_SIZE, PAGE_SIZE * 2, PAGE_HYP);
> > +		if (IS_ERR_OR_NULL(start))
> > +			return PTR_ERR(start);
>
> Likewise.
>
> Thanks,
> Mark.
> > +		end = start + PAGE_SIZE;
> > +
> > +		/* Update stack_hyp_va to end of the stack's private VA range */
> > +		per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long) end;
> >  	}
> >
> >  	/*
> > --
> > 2.35.1.473.g83b2b277ed-goog