Date: Tue, 3 Aug 2021 15:57:18 +0000
From: Sean Christopherson
To: Lai Jiangshan
Cc: LKML, kvm@vger.kernel.org, Paolo Bonzini, Lai Jiangshan, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson
Subject: Re: [RFC PATCH] kvm/x86: Keep root hpa in prev_roots as much as possible
References: <20210525213920.3340-1-jiangshanlai@gmail.com>

On Tue, Aug 03, 2021, Lai Jiangshan wrote:
> On Fri, Jul 30, 2021 at 2:06 AM Sean Christopherson wrote:
> > Ha, we can do this without increasing the memory footprint and without co-opting
> > a bit from pgd or hpa.  Because of compiler alignment/padding, the u8s and bools
> > between mmu_role and prev_roots already occupy 8 bytes, even though the actual
> > size is 4 bytes.  In total, we need room for 4 roots (3 previous + current), i.e.
> > 4 bytes.  If a separate array is used, no additional memory is consumed and no
> > masking is needed when reading/writing e.g. pgd.
> >
> > The cost is an extra swap() when updating the prev_roots LRU, but that's peanuts
> > and would likely be offset by masking anyways.
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 99f37781a6fc..13bb3c3a60b4 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -424,10 +424,12 @@ struct kvm_mmu {
> >         hpa_t root_hpa;
> >         gpa_t root_pgd;
> >         union kvm_mmu_role mmu_role;
> > +       bool root_unsync;
> >         u8 root_level;
> >         u8 shadow_root_level;
> >         u8 ept_ad;
> >         bool direct_map;
> > +       bool unsync_roots[KVM_MMU_NUM_PREV_ROOTS];
> >         struct kvm_mmu_root_info prev_roots[KVM_MMU_NUM_PREV_ROOTS];
>
> Hello
>
> I think it is too complicated. And it is hard to accept to put "unsync"
> out of struct kvm_mmu_root_info when they should be bound to each other.

I agree it's a bit ugly to have the separate unsync_roots array, but I don't see
how it's any more complex.  It's literally a single swap() call.

> How about this:
> - KVM_MMU_NUM_PREV_ROOTS
> + KVM_MMU_NUM_CACHED_ROOTS
> - mmu->prev_roots[KVM_MMU_NUM_PREV_ROOTS]
> + mmu->cached_roots[KVM_MMU_NUM_CACHED_ROOTS]

I don't have a strong preference on PREV vs. CACHED.  CACHED is probably more
intuitive, but KVM isn't truly caching the root, it's just tracking the HPA (and
PGD for indirect MMUs), e.g. the root may no longer exist if the backing shadow
page was zapped.  On the other hand, the main helper is cached_root_available()...

> - mmu->root_hpa
> + mmu->cached_roots[0].hpa
> - mmu->root_pgd
> + mmu->cached_roots[0].pgd
>
> And using the bit63 in @pgd as the information that it is not requested

FWIW, using bit 0 will likely generate more efficient code.

> to sync since the last sync.

Again, I don't have a super strong preference.  I don't hate or love either
one :-)

Vitaly, Paolo, any preferences on names and approaches for tracking if a
"cached" root is unsync?
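P.S. To make the "single swap()" point concrete, here's a minimal userspace
sketch of the idea.  The structs are simplified stand-ins for the kvm_host.h
fields above, and try_get_cached_root() is a made-up helper name for
illustration, not KVM's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

#define KVM_MMU_NUM_PREV_ROOTS 3

typedef uint64_t hpa_t;
typedef uint64_t gpa_t;

struct kvm_mmu_root_info {
	gpa_t pgd;
	hpa_t hpa;
};

/* Simplified stand-in for the relevant slice of struct kvm_mmu. */
struct mmu {
	struct kvm_mmu_root_info root;	/* current root (hpa + pgd) */
	bool root_unsync;		/* current root's unsync flag */
	struct kvm_mmu_root_info prev_roots[KVM_MMU_NUM_PREV_ROOTS];
	bool unsync_roots[KVM_MMU_NUM_PREV_ROOTS];
};

/* Kernel-style swap(); works on structs via plain assignment. */
#define swap(a, b) \
	do { __typeof__(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)

/*
 * Hypothetical helper: search the previous roots for @pgd and, on a hit,
 * promote it to the current root.  Keeping the parallel unsync_roots[]
 * array in lockstep costs exactly one extra swap() per hit, and no bits
 * need to be masked off when reading/writing pgd or hpa.
 */
static bool try_get_cached_root(struct mmu *mmu, gpa_t pgd)
{
	int i;

	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
		if (mmu->prev_roots[i].pgd != pgd)
			continue;

		swap(mmu->root, mmu->prev_roots[i]);
		swap(mmu->root_unsync, mmu->unsync_roots[i]);
		return true;
	}
	return false;
}
```

A hit leaves the demoted root's unsync state in the slot it came from, so
the flag stays bound to its root without living inside kvm_mmu_root_info.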