From: Chao Peng <chao.p.peng@linux.intel.com>
To: Isaku Yamahata <isaku.yamahata@gmail.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>,
Hugh Dickins <hughd@google.com>, Jeff Layton <jlayton@kernel.org>,
"J . Bruce Fields" <bfields@fieldses.org>,
Andrew Morton <akpm@linux-foundation.org>,
Shuah Khan <shuah@kernel.org>, Mike Rapoport <rppt@kernel.org>,
Steven Price <steven.price@arm.com>,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
Vlastimil Babka <vbabka@suse.cz>,
Vishal Annapurve <vannapurve@google.com>,
Yu Zhang <yu.c.zhang@linux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
ddutile@redhat.com, dhildenb@redhat.com,
Quentin Perret <qperret@google.com>,
Michael Roth <michael.roth@amd.com>,
mhocko@suse.com, Muchun Song <songmuchun@bytedance.com>,
wei.w.wang@intel.com
Subject: Re: [PATCH v8 6/8] KVM: Update lpage info when private/shared memory are mixed
Date: Fri, 30 Sep 2022 16:59:14 +0800
Message-ID: <20220930085914.GA2799703@chaop.bj.intel.com>
In-Reply-To: <20220929165206.GA1963093@ls.amr.corp.intel.com>
On Thu, Sep 29, 2022 at 09:52:06AM -0700, Isaku Yamahata wrote:
> On Thu, Sep 15, 2022 at 10:29:11PM +0800,
> Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 08abad4f3e6f..a0f198cede3d 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> ...
> > @@ -6894,3 +6899,115 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
> > if (kvm->arch.nx_lpage_recovery_thread)
> > kthread_stop(kvm->arch.nx_lpage_recovery_thread);
> > }
> > +
> > +static bool mem_attr_is_mixed(struct kvm *kvm, unsigned int attr,
> > + gfn_t start, gfn_t end)
> > +{
> > + XA_STATE(xas, &kvm->mem_attr_array, start);
> > + gfn_t gfn = start;
> > + void *entry;
> > + bool shared, private;
> > + bool mixed = false;
> > +
> > + if (attr == KVM_MEM_ATTR_SHARED) {
> > + shared = true;
> > + private = false;
> > + } else {
> > + shared = false;
> > + private = true;
> > + }
>
> We don't have to care whether the target is shared or private. We only
> need to check whether the entries are the same or not.
There is an optimization opportunity if we know which attr we are going to
set: we can return 'mixed = true' as soon as we find the first opposite
attr, i.e. there is no need to check every child page attr in a large page
before reaching a conclusion.

After a further look, the code can be refined as below:
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7255,17 +7255,9 @@ static bool mem_attr_is_mixed(struct kvm *kvm, unsigned int attr,
XA_STATE(xas, &kvm->mem_attr_array, start);
gfn_t gfn = start;
void *entry;
- bool shared, private;
+ bool shared = attr == KVM_MEM_ATTR_SHARED;
bool mixed = false;
- if (attr == KVM_MEM_ATTR_SHARED) {
- shared = true;
- private = false;
- } else {
- shared = false;
- private = true;
- }
-
rcu_read_lock();
entry = xas_load(&xas);
while (gfn < end) {
@@ -7274,12 +7266,7 @@ static bool mem_attr_is_mixed(struct kvm *kvm, unsigned int attr,
KVM_BUG_ON(gfn != xas.xa_index, kvm);
- if (entry)
- private = true;
- else
- shared = true;
-
- if (private && shared) {
+ if ((entry && !shared) || (!entry && shared)) {
mixed = true;
goto out;
}
@@ -7320,8 +7307,7 @@ static void update_mem_lpage_info(struct kvm *kvm,
* we know they are not mixed.
*/
update_mixed(lpage_info_slot(lpage_start, slot, level),
- mem_attr_is_mixed(kvm, attr, lpage_start,
- lpage_start + pages));
+ mem_attr_is_mixed(kvm, attr, lpage_start, start));
if (lpage_start == lpage_end)
return;
@@ -7330,7 +7316,7 @@ static void update_mem_lpage_info(struct kvm *kvm,
update_mixed(lpage_info_slot(gfn, slot, level), false);
update_mixed(lpage_info_slot(lpage_end, slot, level),
- mem_attr_is_mixed(kvm, attr, lpage_end,
+ mem_attr_is_mixed(kvm, attr, end,
lpage_end + pages));
}
}
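
To illustrate the head/tail trimming (a standalone sketch, not part of the
patch; the gfn numbers are made up, and pages = 512 corresponds to a 2M
large page on x86):

#include <stdio.h>

int main(void)
{
	/* KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) on x86: 512 gfns per 2M page */
	unsigned long pages = 512;
	unsigned long mask = ~(pages - 1);
	/* hypothetical update: set a single attr for gfns [0x201, 0x600) */
	unsigned long start = 0x201, end = 0x600;
	unsigned long lpage_start = start & mask;	/* 0x200 */
	unsigned long lpage_end = (end - 1) & mask;	/* 0x400 */

	/*
	 * [start, end) is uniformly the new attr, so only the parts of
	 * the head and tail large pages outside it can still disagree.
	 */
	printf("head page: scan [%#lx, %#lx)\n", lpage_start, start);
	printf("tail page: scan [%#lx, %#lx)\n", end, lpage_end + pages);
	return 0;
}

This prints a one-gfn scan for the head page ([0x200, 0x201)) and an empty
range for the tail page, since [0x400, 0x600) is fully covered by the
update and thus known not to be mixed.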
>
> > +
> > + rcu_read_lock();
> > + entry = xas_load(&xas);
> > + while (gfn < end) {
> > + if (xas_retry(&xas, entry))
> > + continue;
> > +
> > + KVM_BUG_ON(gfn != xas.xa_index, kvm);
> > +
> > + if (entry)
> > + private = true;
> > + else
> > + shared = true;
> > +
> > + if (private && shared) {
> > + mixed = true;
> > + goto out;
> > + }
> > +
> > + entry = xas_next(&xas);
> > + gfn++;
> > + }
> > +out:
> > + rcu_read_unlock();
> > + return mixed;
> > +}
> > +
> > +static inline void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
> > +{
> > + if (mixed)
> > + linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
> > + else
> > + linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
> > +}
> > +
> > +static void update_mem_lpage_info(struct kvm *kvm,
> > + struct kvm_memory_slot *slot,
> > + unsigned int attr,
> > + gfn_t start, gfn_t end)
> > +{
> > + unsigned long lpage_start, lpage_end;
> > + unsigned long gfn, pages, mask;
> > + int level;
> > +
> > + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> > + pages = KVM_PAGES_PER_HPAGE(level);
> > + mask = ~(pages - 1);
> > + lpage_start = start & mask;
> > + lpage_end = (end - 1) & mask;
> > +
> > + /*
> > + * We only need to scan the head and tail page, for middle pages
> > + * we know they are not mixed.
> > + */
> > + update_mixed(lpage_info_slot(lpage_start, slot, level),
> > + mem_attr_is_mixed(kvm, attr, lpage_start,
> > + lpage_start + pages));
> > +
> > + if (lpage_start == lpage_end)
> > + return;
> > +
> > + for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages)
> > + update_mixed(lpage_info_slot(gfn, slot, level), false);
>
>
> For the >2M case, we don't have to check every entry; just check the
> lower-level case.
Sounds good, we can reduce some scanning.
Thanks,
Chao
>
> > +
> > + update_mixed(lpage_info_slot(lpage_end, slot, level),
> > + mem_attr_is_mixed(kvm, attr, lpage_end,
> > + lpage_end + pages));
> > + }
> > +}
> > +
> > +void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
> > + gfn_t start, gfn_t end)
> > +{
> > + struct kvm_memory_slot *slot;
> > + struct kvm_memslots *slots;
> > + struct kvm_memslot_iter iter;
> > + int i;
> > +
> > + WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)),
> > + "Unsupported mem attribute.\n");
> > +
> > + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> > + slots = __kvm_memslots(kvm, i);
> > +
> > + kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
> > + slot = iter.slot;
> > + start = max(start, slot->base_gfn);
> > + end = min(end, slot->base_gfn + slot->npages);
> > + if (WARN_ON_ONCE(start >= end))
> > + continue;
> > +
> > + update_mem_lpage_info(kvm, slot, attr, start, end);
> > + }
> > + }
> > +}
>
>
> Here is my updated version.
>
> bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
> {
> gfn_t pages = KVM_PAGES_PER_HPAGE(level);
> gfn_t mask = ~(pages - 1);
> struct kvm_lpage_info *linfo = lpage_info_slot(gfn & mask, slot, level);
>
> WARN_ON_ONCE(level == PG_LEVEL_4K);
> return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED;
> }
>
> #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_ATTR
> static void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
> {
> if (mixed)
> linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
> else
> linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
> }
>
> static bool __mem_attr_is_mixed(struct kvm *kvm, gfn_t start, gfn_t end)
> {
> XA_STATE(xas, &kvm->mem_attr_array, start);
> bool mixed = false;
> gfn_t gfn = start;
> void *s_entry;
> void *entry;
>
> rcu_read_lock();
> s_entry = xas_load(&xas);
> entry = s_entry;
> while (gfn < end) {
> if (xas_retry(&xas, entry))
> continue;
>
> KVM_BUG_ON(gfn != xas.xa_index, kvm);
>
> entry = xas_next(&xas);
> if (entry != s_entry) {
> mixed = true;
> break;
> }
> gfn++;
> }
> rcu_read_unlock();
> return mixed;
> }
>
> static bool mem_attr_is_mixed(struct kvm *kvm,
> struct kvm_memory_slot *slot, int level,
> gfn_t start, gfn_t end)
> {
> struct kvm_lpage_info *child_linfo;
> unsigned long child_pages;
> bool mixed = false;
> unsigned long gfn;
> void *entry;
>
> if (WARN_ON_ONCE(level == PG_LEVEL_4K))
> return false;
>
> if (level == PG_LEVEL_2M)
> return __mem_attr_is_mixed(kvm, start, end);
>
> /* This assumes that level - 1 is already updated. */
> rcu_read_lock();
> child_pages = KVM_PAGES_PER_HPAGE(level - 1);
> entry = xa_load(&kvm->mem_attr_array, start);
> for (gfn = start; gfn < end; gfn += child_pages) {
> child_linfo = lpage_info_slot(gfn, slot, level - 1);
> if (child_linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED) {
> mixed = true;
> break;
> }
> if (xa_load(&kvm->mem_attr_array, gfn) != entry) {
> mixed = true;
> break;
> }
> }
> rcu_read_unlock();
> return mixed;
> }
>
> static void update_mem_lpage_info(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> unsigned int attr,
> gfn_t start, gfn_t end)
> {
> unsigned long lpage_start, lpage_end;
> unsigned long gfn, pages, mask;
> int level;
>
> for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> pages = KVM_PAGES_PER_HPAGE(level);
> mask = ~(pages - 1);
> lpage_start = start & mask;
> lpage_end = (end - 1) & mask;
>
> /*
> * We only need to scan the head and tail page, for middle pages
> * we know they are not mixed.
> */
> update_mixed(lpage_info_slot(lpage_start, slot, level),
> mem_attr_is_mixed(kvm, slot, level,
> lpage_start, lpage_start + pages));
>
> if (lpage_start == lpage_end)
> return;
>
> for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages)
> update_mixed(lpage_info_slot(gfn, slot, level), false);
>
> update_mixed(lpage_info_slot(lpage_end, slot, level),
> mem_attr_is_mixed(kvm, slot, level,
> lpage_end, lpage_end + pages));
> }
> }
>
> void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
> gfn_t start, gfn_t end)
> {
> struct kvm_memory_slot *slot;
> struct kvm_memslots *slots;
> struct kvm_memslot_iter iter;
> int idx;
> int i;
>
> WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)),
> "Unsupported mem attribute.\n");
>
> idx = srcu_read_lock(&kvm->srcu);
> for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> slots = __kvm_memslots(kvm, i);
>
> kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
> slot = iter.slot;
> start = max(start, slot->base_gfn);
> end = min(end, slot->base_gfn + slot->npages);
> if (WARN_ON_ONCE(start >= end))
> continue;
>
> update_mem_lpage_info(kvm, slot, attr, start, end);
> }
> }
> srcu_read_unlock(&kvm->srcu, idx);
> }
> #endif
>
>
> --
> Isaku Yamahata <isaku.yamahata@gmail.com>