From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 6 Aug 2021 14:40:00 +0100
From: Quentin Perret
To: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    Catalin Marinas, Marc Zyngier, Jade Alglave, Shameer Kolothum,
    kvmarm@lists.cs.columbia.edu, linux-arch@vger.kernel.org
Subject: Re: [PATCH
 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
References: <20210806113109.2475-1-will@kernel.org> <20210806113109.2475-5-will@kernel.org>
In-Reply-To: <20210806113109.2475-5-will@kernel.org>

On Friday 06 Aug 2021 at 12:31:07 (+0100), Will Deacon wrote:
> From: Marc Zyngier
>
> The protected mode relies on a separate helper to load the
> S2 context. Move over to the __load_guest_stage2() helper
> instead.
>
> Cc: Catalin Marinas
> Cc: Jade Alglave
> Cc: Shameer Kolothum
> Signed-off-by: Marc Zyngier
> Signed-off-by: Will Deacon
> ---
>  arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
>  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
>  3 files changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 05e089653a1a..934ef0deff9f 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
>   * Must be called from hyp code running at EL2 with an updated VTTBR
>   * and interrupts disabled.
>   */
> -static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
> +static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> +						struct kvm_arch *arch)
>  {
> -	write_sysreg(vtcr, vtcr_el2);
> +	write_sysreg(arch->vtcr, vtcr_el2);
>  	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
>
>  	/*
> @@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
>  	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
>  }
>
> -static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> -						struct kvm_arch *arch)
> -{
> -	__load_stage2(mmu, arch->vtcr);
> -}
> -
>  static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
>  {
>  	return container_of(mmu->arch, struct kvm, arch);
> diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> index 9c227d87c36d..a910648bc71b 100644
> --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> @@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
>  static __always_inline void __load_host_stage2(void)
>  {
>  	if (static_branch_likely(&kvm_protected_mode_initialized))
> -		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> +		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
>  	else
>  		write_sysreg(0, vttbr_el2);
>  }
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index d938ce95d3bd..d4e74ca7f876 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
>  	kvm_flush_dcache_to_poc(params, sizeof(*params));
>
>  	write_sysreg(params->hcr_el2, hcr_el2);
> -	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> +	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);

Nit: clearly we're not loading a guest stage-2 here, so maybe the
function should take a more generic name?
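
For the record, one possible shape (purely an illustrative, untested
sketch that only reuses identifiers already in this patch) would be to
keep the generic __load_stage2() name for the helper that programs
VTCR_EL2/VTTBR_EL2 from a kvm_arch, so the host-S2 callers don't read
as if they were loading a guest context:

	/*
	 * Generic stage-2 load: works for both guest and host stage-2,
	 * since it only needs the kvm_arch and the s2_mmu.
	 */
	static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu,
						  struct kvm_arch *arch)
	{
		write_sysreg(arch->vtcr, vtcr_el2);
		write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);

		/* Keep the ARM64_WORKAROUND_SPECULATIVE_AT hack as-is. */
		asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
	}

__load_host_stage2() and __pkvm_prot_finalize() could then call
__load_stage2(&host_kvm.arch.mmu, &host_kvm.arch) directly, and the
guest path could keep __load_guest_stage2() as a trivial wrapper if
that reads better.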
Thanks,
Quentin