Date: Tue, 3 Jan 2023 23:06:37 +0000
From: Sean Christopherson
To: "Wang, Wei W"
Cc: Chao Peng, "Qiang, Chenyi", kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
    Paolo Bonzini, Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann,
    Naoya Horiguchi, Miaohe Lin, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
    Jeff Layton,
Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , "Kirill A . Shutemov" , "Lutomirski, Andy" , "Nakajima, Jun" , "Hansen, Dave" , "ak@linux.intel.com" , "david@redhat.com" , "aarcange@redhat.com" , "ddutile@redhat.com" , "dhildenb@redhat.com" , Quentin Perret , "tabba@google.com" , Michael Roth , "Hocko, Michal" Subject: Re: [PATCH v10 2/9] KVM: Introduce per-page memory attributes Message-ID: References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> <20221202061347.1070246-3-chao.p.peng@linux.intel.com> <1c9bbaa5-eea3-351e-d6a0-cfbc32115c82@intel.com> <20230103013948.GA2178318@chaop.bj.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-api@vger.kernel.org On Tue, Jan 03, 2023, Wang, Wei W wrote: > On Tuesday, January 3, 2023 9:40 AM, Chao Peng wrote: > > > Because guest memory defaults to private, and now this patch stores > > > the attributes with KVM_MEMORY_ATTRIBUTE_PRIVATE instead of > > _SHARED, > > > it would bring more KVM_EXIT_MEMORY_FAULT exits at the beginning of > > > boot time. Maybe it can be optimized somehow in other places? e.g. set > > > mem attr in advance. > > > > KVM defaults to 'shared' because this ioctl can also be potentially used by > > normal VMs and 'shared' sounds a value meaningful for both normal VMs and > > confidential VMs. > > Do you mean a normal VM could have pages marked private? What's the usage? > (If all the pages are just marked shared for normal VMs, then why do we need it) No, there are potential use cases for per-page attribute/permissions, e.g. to make select pages read-only, exec-only, no-exec, etc... > > As for more KVM_EXIT_MEMORY_FAULT exits during the > > booting time, yes, setting all memory to 'private' for confidential VMs through > > this ioctl in userspace before guest launch is an approach for KVM userspace to > > 'override' the KVM default and reduce the number of implicit conversions. > > Most pages of a confidential VM are likely to be private pages. It seems more efficient > (and not difficult to check vm_type) to have KVM defaults to "private" for confidential VMs > and defaults to "shared" for normal VMs. If done right, the default shouldn't matter all that much for efficiency. KVM needs to be able to effeciently track large ranges regardless of the default, otherwise the memory overhead and the presumably cost of lookups will be painful. E.g. converting a 1GiB chunk to shared should ideally require one entry, not 256k entries. Looks like that behavior was changed in v8 in response to feedback[*] that doing xa_store_range() on a subset of an existing range (entry) would overwrite the entire existing range (entry), not just the smaller subset. xa_store_range() does appear to be too simplistic for this use case, but looking at __filemap_add_folio(), splitting an existing entry isn't super complex. Using xa_store() for the very initial implementation is ok, and probably a good idea since it's more obviously correct and will give us a bisection point. But we definitely want a more performant implementation sooner than later. The hardest part will likely be merging existing entries, but that can be done separately too, and is probably lower priority. E.g. 
[*] https://lore.kernel.org/all/CAGtprH9xyw6bt4=RBWF6-v2CSpabOCpKq5rPz+e-9co7EisoVQ@mail.gmail.com