From: Kalesh Singh
Date: Tue, 22 Feb 2022 12:30:11 -0800
Subject: Re: [PATCH v2 4/9] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack
To: Mark Rutland
Cc: Will Deacon, Marc Zyngier, Quentin Perret, Fuad Tabba, Suren Baghdasaryan, "Cc: Android Kernel", Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose, Ard Biesheuvel, Pasha Tatashin, Joey Gouly, Peter Collingbourne, Andrew Scull, "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)", LKML, kvmarm@lists.cs.columbia.edu
References: <20220222165212.2005066-1-kaleshsingh@google.com> <20220222165212.2005066-5-kaleshsingh@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 22, 2022 at 10:55 AM Mark Rutland wrote:
>
> On Tue, Feb 22, 2022 at 08:51:05AM -0800, Kalesh Singh wrote:
> > Maps the stack pages in the flexible private VA range and allocates
> > guard pages below the stack as unbacked VA space. The stack is aligned
> > to twice its size to aid overflow detection (implemented in a subsequent
> > patch in the series).
> >
> > Signed-off-by: Kalesh Singh
> > ---
> >  arch/arm64/kvm/hyp/nvhe/setup.c | 25 +++++++++++++++++++++----
> >  1 file changed, 21 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > index 27af337f9fea..69df21320b09 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > @@ -105,11 +105,28 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
> >  		if (ret)
> >  			return ret;
> >
> > -		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
> > +		/*
> > +		 * Private mappings are allocated upwards from __io_map_base
> > +		 * so allocate the guard page first then the stack.
> > +		 */
> > +		start = (void *)pkvm_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
> > +		if (IS_ERR_OR_NULL(start))
> > +			return PTR_ERR(start);
>
> As on a prior patch, this usage of PTR_ERR() pattern is wrong when the
> ptr is NULL.

Ack. I'll fix these in the next version.
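The shape I have in mind is roughly the following (an untested sketch,
assuming pkvm_alloc_private_va_range() keeps its current ERR_PTR-style
return convention; the exact form may end up different in v3). The
point is that PTR_ERR(NULL) evaluates to 0, so a NULL return has to be
mapped to a real error code or the caller would see it as success:

	start = (void *)pkvm_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
	if (IS_ERR(start))
		return PTR_ERR(start);
	if (!start)			/* NULL is not an ERR_PTR; report -ENOMEM */
		return -ENOMEM;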
Thanks,
Kalesh

> > +
> > +		/*
> > +		 * The stack is aligned to twice its size to facilitate overflow
> > +		 * detection.
> > +		 */
> > +		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_pa;
> >  		start = end - PAGE_SIZE;
> > -		ret = pkvm_create_mappings(start, end, PAGE_HYP);
> > -		if (ret)
> > -			return ret;
> > +		start = (void *)__pkvm_create_private_mapping((phys_addr_t)start,
> > +					PAGE_SIZE, PAGE_SIZE * 2, PAGE_HYP);
> > +		if (IS_ERR_OR_NULL(start))
> > +			return PTR_ERR(start);
>
> Likewise.
>
> Thanks,
> Mark.
>
> > +		end = start + PAGE_SIZE;
> > +
> > +		/* Update stack_hyp_va to end of the stack's private VA range */
> > +		per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long) end;
> >  	}
> >
> >  	/*
> > --
> > 2.35.1.473.g83b2b277ed-goog
> >
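P.S. For anyone following the thread, a rough C model of why the stack
is aligned to twice its size (illustration only: the names and the
page-shift value below are made up, and the actual check lands in a
later patch in this series). Because the stack page starts on a
2 * PAGE_SIZE boundary with the unbacked guard region directly below
it, bit PAGE_SHIFT of the stack pointer is clear while SP stays inside
the stack page and becomes set as soon as it descends into the guard
region, so overflow reduces to a single-bit test:

	#include <stdbool.h>
	#include <stdint.h>

	#define HYP_PAGE_SHIFT	12			/* assuming 4K pages */
	#define HYP_PAGE_SIZE	(1UL << HYP_PAGE_SHIFT)

	/*
	 * 'sp' is sampled after the entry path has pushed its frame, so a
	 * non-overflowed SP lies strictly inside the PAGE_SIZE stack page,
	 * whose 2 * PAGE_SIZE alignment keeps bit HYP_PAGE_SHIFT clear;
	 * overflowing into the guard page below sets that bit.
	 */
	static bool hyp_stack_overflowed(uintptr_t sp)
	{
		return sp & HYP_PAGE_SIZE;
	}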