From: Oliver Upton
Date: Sat, 27 May 2023 18:08:36 +0000
To: Yu Zhao
Cc: Andrew Morton, Paolo Bonzini, Alistair Popple, Anup Patel, Ben Gardon,
    Borislav Petkov, Catalin Marinas, Chao Peng, Christophe Leroy,
    Dave Hansen, Fabiano Rosas, Gaosheng Cui, Gavin Shan, "H. Peter Anvin",
    Ingo Molnar, James Morse, "Jason A. Donenfeld", Jason Gunthorpe,
    Jonathan Corbet, Marc Zyngier, Masami Hiramatsu, Michael Ellerman,
    Michael Larabel, Mike Rapoport, Nicholas Piggin, Paul Mackerras,
    Peter Xu, Sean Christopherson, Steven Rostedt, Suzuki K Poulose,
    Thomas Gleixner, Thomas Huth, Will Deacon, Zenghui Yu,
    kvmarm@lists.linux.dev, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org,
    x86@kernel.org, linux-mm@google.com
Subject: Re: [PATCH mm-unstable v2 04/10] kvm/arm64: make stage2 page tables RCU safe
References: <20230526234435.662652-1-yuzhao@google.com>
 <20230526234435.662652-5-yuzhao@google.com>
In-Reply-To: <20230526234435.662652-5-yuzhao@google.com>

Yu,

On Fri, May 26, 2023 at 05:44:29PM -0600, Yu Zhao wrote:
> Stage2 page tables are currently not RCU safe against unmapping or VM
> destruction. The previous mmu_notifier_ops members rely on
> kvm->mmu_lock to synchronize with those operations.
>
> However, the new mmu_notifier_ops member test_clear_young() provides
> a fast path that does not take kvm->mmu_lock. To implement
> kvm_arch_test_clear_young() for that path, unmapped page tables need
> to be freed by RCU and kvm_free_stage2_pgd() needs to be after
> mmu_notifier_unregister().
>
> Remapping, specifically stage2_free_removed_table(), is already RCU
> safe.
>
> Signed-off-by: Yu Zhao
> ---
>  arch/arm64/include/asm/kvm_pgtable.h |  2 ++
>  arch/arm64/kvm/arm.c                 |  1 +
>  arch/arm64/kvm/hyp/pgtable.c         |  8 ++++++--
>  arch/arm64/kvm/mmu.c                 | 17 ++++++++++++++++-
>  4 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index ff520598b62c..5cab52e3a35f 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -153,6 +153,7 @@ static inline bool kvm_level_supports_block_mapping(u32 level)
>   * @put_page:			Decrement the refcount on a page. When the
>   *				refcount reaches 0 the page is automatically
>   *				freed.
> + * @put_page_rcu:		RCU variant of the above.

You don't need to add yet another hook to implement this. I was working
on lock-free walks in a separate context and arrived at the following:

commit f82d264a37745e07ee28e116c336f139f681fd7f
Author: Oliver Upton
Date:   Mon May 1 08:53:37 2023 +0000

    KVM: arm64: Consistently use free_removed_table() for stage-2

    free_removed_table() is essential to the RCU-protected parallel walking
    scheme, as behind the scenes the cleanup is deferred until an RCU grace
    period. Nonetheless, the stage-2 unmap path calls put_page() directly,
    which leads to table memory being freed inline with the table walk.

    This is safe for the time being, as the stage-2 unmap walker is called
    while holding the write lock. A future change to KVM will further relax
    the locking mechanics around the stage-2 page tables to allow lock-free
    walkers protected only by RCU. As such, switch to the RCU-safe mechanism
    for freeing table memory.

    Signed-off-by: Oliver Upton

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d..bfbebdcb4ef0 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1019,7 +1019,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 					       kvm_granule_size(ctx->level));
 
 	if (childp)
-		mm_ops->put_page(childp);
+		mm_ops->free_removed_table(childp, ctx->level);
 
 	return 0;
 }

--
Thanks,
Oliver
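
[Editor's note for readers following the thread: both messages rely on the same
mechanism, namely that a detached stage-2 table page is not freed inline but is
handed to RCU, so lock-free walkers that may still hold a pointer into the old
subtree get a full grace period before the memory is reused. The sketch below
illustrates that deferral pattern only; the helper names
(example_defer_table_free(), example_free_detached_table()) are hypothetical
and are not the actual KVM code.]

/*
 * Illustrative sketch (assumptions, not the in-tree implementation):
 * defer freeing of a detached page-table page to an RCU callback so that
 * walkers protected only by RCU cannot observe the memory being reused
 * before a grace period has elapsed.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/rcupdate.h>
#include <linux/types.h>

/* Hypothetical helper that tears down the now-unreachable subtree. */
void example_free_detached_table(void *table, u32 level);

static void example_table_free_rcu_cb(struct rcu_head *head)
{
	/* Recover the page (and thus the table) from its embedded rcu_head. */
	struct page *page = container_of(head, struct page, rcu_head);
	void *table = page_to_virt(page);
	u32 level = page_private(page);

	example_free_detached_table(table, level);
}

static void example_defer_table_free(void *table, u32 level)
{
	struct page *page = virt_to_page(table);

	/* Stash the table level so the RCU callback knows where it sits. */
	set_page_private(page, (unsigned long)level);
	call_rcu(&page->rcu_head, example_table_free_rcu_cb);
}

[Because the existing free_removed_table() path already arranges this kind of
grace-period deferral, Oliver's diff can simply route the unmap walker through
mm_ops->free_removed_table() instead of adding a separate put_page_rcu() hook.]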