From: Sean Christopherson <seanjc@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Anup Patel <anup@brainfault.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Andrew Jones <drjones@redhat.com>,
	Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
	"Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>,
	"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" 
	<kvmarm@lists.cs.columbia.edu>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" 
	<linux-mips@vger.kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" 
	<kvm@vger.kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" 
	<kvm-riscv@lists.infradead.org>,
	Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH v4 11/20] KVM: x86/mmu: Allow for NULL vcpu pointer in __kvm_mmu_get_shadow_page()
Date: Mon, 9 May 2022 22:56:57 +0000	[thread overview]
Message-ID: <YnmcOdZILo2LqhAW@google.com> (raw)
In-Reply-To: <CALzav=de1=euis3WocTNBi+xNn1Ypo-GRROQNqmtAKk6q1NUqg@mail.gmail.com>

On Mon, May 09, 2022, David Matlack wrote:
> On Thu, May 5, 2022 at 4:33 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Fri, Apr 22, 2022, David Matlack wrote:
> > > Allow the vcpu pointer in __kvm_mmu_get_shadow_page() to be NULL. Rename
> > > it to vcpu_or_null to prevent future commits from accidentally taking a
> > > dependency on it without first considering the NULL case.
> > >
> > > The vcpu pointer is only used for syncing indirect shadow pages in
> > > kvm_mmu_find_shadow_page(). A vcpu pointer is not required for
> > > correctness since unsync pages can simply be zapped. But this should
> > > never occur in practice, since the only use-case for passing a NULL vCPU
> > > pointer is eager page splitting which will only request direct shadow
> > > pages (which can never be unsync).
> > >
> > > Even though __kvm_mmu_get_shadow_page() can gracefully handle a NULL
> > > vcpu, add a WARN() that will fire if __kvm_mmu_get_shadow_page() is ever
> > > called to get an indirect shadow page with a NULL vCPU pointer, since
> > > zapping unsync SPs is a performance overhead that should be considered.
> > >
> > > Signed-off-by: David Matlack <dmatlack@google.com>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 40 ++++++++++++++++++++++++++++++++--------
> > >  1 file changed, 32 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 04029c01aebd..21407bd4435a 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -1845,16 +1845,27 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
> > >         &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])     \
> > >               if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
> > >
> > > -static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> > > -                      struct list_head *invalid_list)
> > > +static int __kvm_sync_page(struct kvm *kvm, struct kvm_vcpu *vcpu_or_null,
> > > +                        struct kvm_mmu_page *sp,
> > > +                        struct list_head *invalid_list)
> > >  {
> > > -     int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
> > > +     int ret = -1;
> > > +
> > > +     if (vcpu_or_null)
> >
> > This should never happen.  I like the idea of warning early, but I really don't
> > like that the WARN is far removed from the code that actually depends on @vcpu
> > being non-NULL. Case in point, KVM should have bailed on the WARN and never
> > reached this point.  And the inner __kvm_sync_page() is completely unnecessary.
> 
> Yeah that's fair.
> 
> >
> > I also don't love the vcpu_or_null terminology; I get the intent, but it doesn't
> > really help readers understand why/when it's NULL.
> 
> Eh, I don't think it needs to encode why or when. It just needs to
> flag to the reader (and future code authors) that this vcpu pointer
> (unlike all other vcpu pointers in KVM) is NULL in certain cases.

My objection is that without the why/when, developers who aren't familiar with
this code won't know the rules for using vcpu_or_null.  E.g. I don't want to end
up with

	if (vcpu_or_null)
		do x;
	else
		do y;

because inevitably it'll become unclear whether or not that code is actually _correct_.
It might not #GP on a NULL pointer, but that doesn't mean it's correct.

> > I played around with casting, e.g. to/from an unsigned long or void *, to prevent
> > usage, but that doesn't work very well because 'unsigned long' ends up being
> > awkward/confusing, and 'void *' is easily lost on a function call.  And both
> > lose type safety :-(
> 
> Yet another shortcoming of C :(

And lack of closures, which would work very well here.
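
A minimal, self-contained sketch of the idiom in play here: emulating a
closure in C with a context struct plus a function pointer (generic C with
hypothetical names, not actual KVM code):

	#include <stdio.h>

	struct vcpu { int id; };

	/* Poor man's closure: the callback and its captured state travel together. */
	struct sync_closure {
		int (*sync)(struct vcpu *vcpu, int gfn);
		struct vcpu *vcpu;	/* "captured" state, dereferenced only via sync() */
	};

	static int do_sync(struct vcpu *vcpu, int gfn)
	{
		printf("vcpu%d: syncing gfn %d\n", vcpu->id, gfn);
		return 0;
	}

	static void find_page(struct sync_closure c, int gfn)
	{
		/* No closure => the caller can't sync; zap (here, just report) instead. */
		if (!c.sync) {
			printf("no sync callback, zapping gfn %d\n", gfn);
			return;
		}
		c.sync(c.vcpu, gfn);
	}

	int main(void)
	{
		struct vcpu v = { .id = 0 };

		find_page((struct sync_closure){ .sync = do_sync, .vcpu = &v }, 42);
		find_page((struct sync_closure){ 0 }, 42);	/* vCPU-less caller */
		return 0;
	}

The point of the idiom is that no raw, maybe-NULL vcpu pointer is ever
dereferenced outside the callback, which is the property a real closure
would give for free.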

> (The other being our other discussion about the RET_PF* return codes
> getting easily misinterpreted as KVM's magic return-to-user /
> continue-running-guest return codes.)
> 
> Makes me miss Rust!
> 
> >
> > All in all, I think I'd prefer this patch to simply be a KVM_BUG_ON() if
> > kvm_mmu_find_shadow_page() encounters an unsync page.  Less churn, and IMO there's
> > no real loss in robustness, e.g. we'd really have to screw up code review and
> > testing to introduce a null vCPU pointer dereference in this code.
> 
> Agreed about moving the check here and dropping __kvm_sync_page(). But
> I would prefer to retain the vcpu_or_null name (or at least something
> other than "vcpu" to indicate there's something non-standard about
> this pointer).

The least awful idea I've come up with is wrapping the vCPU in a struct, e.g.

	struct sync_page_info {
		void *vcpu;
	};

That provides the contextual information I want, and also hints that something
is odd about the vcpu, which is what you want.  It's like a very poor man's closure :-)
	
The struct could even be passed by value to avoid the minuscule overhead, and to
make readers look extra hard because it's that much more weird.

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3d102522804a..068be77a4fff 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2003,8 +2003,13 @@ static void clear_sp_write_flooding_count(u64 *spte)
        __clear_sp_write_flooding_count(sptep_to_sp(spte));
 }

+/* Wrapper to make it difficult to dereference a potentially NULL @vcpu. */
+struct sync_page_info {
+       void *vcpu;
+};
+
 static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
-                                                    struct kvm_vcpu *vcpu,
+                                                    struct sync_page_info spi,
                                                     gfn_t gfn,
                                                     struct hlist_head *sp_list,
                                                     union kvm_mmu_page_role role)
@@ -2041,6 +2046,13 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
                        goto out;

                if (sp->unsync) {
+                       /*
+                        * Getting indirect shadow pages without a valid @spi
+                        * is not supported, i.e. this should never happen.
+                        */
+                       if (KVM_BUG_ON(!spi.vcpu, kvm))
+                               break;
+
                        /*
                         * The page is good, but is stale.  kvm_sync_page does
                         * get the latest guest state, but (unlike mmu_unsync_children)
@@ -2053,7 +2065,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
                         * If the sync fails, the page is zapped.  If so, break
                         * in order to rebuild it.
                         */
-                       ret = kvm_sync_page(vcpu, sp, &invalid_list);
+                       ret = kvm_sync_page(spi.vcpu, sp, &invalid_list);
                        if (ret < 0)
                                break;

@@ -2120,7 +2132,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
 }

 static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
-                                                     struct kvm_vcpu *vcpu,
+                                                     struct sync_page_info spi,
                                                      struct shadow_page_caches *caches,
                                                      gfn_t gfn,
                                                      union kvm_mmu_page_role role)
@@ -2131,7 +2143,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,

        sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];

-       sp = kvm_mmu_find_shadow_page(kvm, vcpu, gfn, sp_list, role);
+       sp = kvm_mmu_find_shadow_page(kvm, spi, gfn, sp_list, role);
        if (!sp) {
                created = true;
                sp = kvm_mmu_alloc_shadow_page(kvm, caches, gfn, sp_list, role);
@@ -2151,7 +2163,11 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
                .gfn_array_cache = &vcpu->arch.mmu_gfn_array_cache,
        };

-       return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
+       struct sync_page_info spi = {
+               .vcpu = vcpu,
+       };
+
+       return __kvm_mmu_get_shadow_page(vcpu->kvm, spi, &caches, gfn, role);
 }

 static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, u32 access)
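
For completeness: a caller without a vCPU, i.e. the eager page splitting
path discussed above, is not shown in this diff; presumably it would just
pass an empty wrapper, along the lines of this hypothetical sketch (the
function name is made up, only the __kvm_mmu_get_shadow_page() signature
above is from the diff):

	/* Hypothetical vCPU-less caller, e.g. for eager page splitting. */
	static struct kvm_mmu_page *get_sp_for_split(struct kvm *kvm,
						     struct shadow_page_caches *caches,
						     gfn_t gfn,
						     union kvm_mmu_page_role role)
	{
		/* No vCPU to sync with; KVM_BUG_ON() fires if an unsync SP is found. */
		struct sync_page_info spi = {
			.vcpu = NULL,
		};

		return __kvm_mmu_get_shadow_page(kvm, spi, caches, gfn, role);
	}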



Thread overview: 120+ messages
2022-04-22 21:05 [PATCH v4 00/20] KVM: Extend Eager Page Splitting to the shadow MMU David Matlack
2022-04-22 21:05 ` [PATCH v4 01/20] KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs David Matlack
2022-05-07  7:46   ` Lai Jiangshan
2022-04-22 21:05 ` [PATCH v4 02/20] KVM: x86/mmu: Use a bool for direct David Matlack
2022-05-07  7:46   ` Lai Jiangshan
2022-04-22 21:05 ` [PATCH v4 03/20] KVM: x86/mmu: Derive shadow MMU page role from parent David Matlack
2022-05-05 21:50   ` Sean Christopherson
2022-05-09 22:10     ` David Matlack
2022-05-10  2:38       ` Lai Jiangshan
2022-05-07  8:28   ` Lai Jiangshan
2022-05-09 21:04     ` David Matlack
2022-05-10  2:58       ` Lai Jiangshan
2022-05-10 13:31         ` Sean Christopherson
2022-05-12 16:10         ` David Matlack
2022-05-13 18:26           ` David Matlack
2022-04-22 21:05 ` [PATCH v4 04/20] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions David Matlack
2022-05-05 21:58   ` Sean Christopherson
2022-04-22 21:05 ` [PATCH v4 05/20] KVM: x86/mmu: Consolidate shadow page allocation and initialization David Matlack
2022-05-05 22:10   ` Sean Christopherson
2022-05-09 20:53     ` David Matlack
2022-04-22 21:05 ` [PATCH v4 06/20] KVM: x86/mmu: Rename shadow MMU functions that deal with shadow pages David Matlack
2022-05-05 22:15   ` Sean Christopherson
2022-04-22 21:05 ` [PATCH v4 07/20] KVM: x86/mmu: Move guest PT write-protection to account_shadowed() David Matlack
2022-05-05 22:51   ` Sean Christopherson
2022-05-09 21:18     ` David Matlack
2022-04-22 21:05 ` [PATCH v4 08/20] KVM: x86/mmu: Pass memory caches to allocate SPs separately David Matlack
2022-05-05 23:00   ` Sean Christopherson
2022-04-22 21:05 ` [PATCH v4 09/20] KVM: x86/mmu: Replace vcpu with kvm in kvm_mmu_alloc_shadow_page() David Matlack
2022-04-22 21:05 ` [PATCH v4 10/20] KVM: x86/mmu: Pass kvm pointer separately from vcpu to kvm_mmu_find_shadow_page() David Matlack
2022-04-22 21:05 ` [PATCH v4 11/20] KVM: x86/mmu: Allow for NULL vcpu pointer in __kvm_mmu_get_shadow_page() David Matlack
2022-05-05 23:33   ` Sean Christopherson
2022-05-09 21:26     ` David Matlack
2022-05-09 22:56       ` Sean Christopherson [this message]
2022-05-09 23:59         ` David Matlack
2022-04-22 21:05 ` [PATCH v4 12/20] KVM: x86/mmu: Pass const memslot to rmap_add() David Matlack
2022-04-22 21:05 ` [PATCH v4 13/20] KVM: x86/mmu: Decouple rmap_add() and link_shadow_page() from kvm_vcpu David Matlack
2022-05-05 23:46   ` Sean Christopherson
2022-05-09 21:27     ` David Matlack
2022-04-22 21:05 ` [PATCH v4 14/20] KVM: x86/mmu: Update page stats in __rmap_add() David Matlack
2022-04-22 21:05 ` [PATCH v4 15/20] KVM: x86/mmu: Cache the access bits of shadowed translations David Matlack
2022-05-06 19:47   ` Sean Christopherson
2022-05-09 16:10   ` Sean Christopherson
2022-05-09 21:29     ` David Matlack
2022-04-22 21:05 ` [PATCH v4 16/20] KVM: x86/mmu: Extend make_huge_page_split_spte() for the shadow MMU David Matlack
2022-05-09 16:22   ` Sean Christopherson
2022-05-09 21:31     ` David Matlack
2022-04-22 21:05 ` [PATCH v4 17/20] KVM: x86/mmu: Zap collapsible SPTEs at all levels in " David Matlack
2022-05-09 16:31   ` Sean Christopherson
2022-05-09 21:34     ` David Matlack
2022-04-22 21:05 ` [PATCH v4 18/20] KVM: x86/mmu: Refactor drop_large_spte() David Matlack
2022-05-09 16:36   ` Sean Christopherson
2022-04-22 21:05 ` [PATCH v4 19/20] KVM: Allow for different capacities in kvm_mmu_memory_cache structs David Matlack
2022-04-23  8:08   ` kernel test robot
2022-04-24 15:21   ` kernel test robot
2022-04-22 21:05 ` [PATCH v4 20/20] KVM: x86/mmu: Extend Eager Page Splitting to nested MMUs David Matlack
2022-05-07  7:51   ` Lai Jiangshan
2022-05-09 21:40     ` David Matlack
2022-05-09 16:48   ` Sean Christopherson
2022-05-09 21:44     ` David Matlack
2022-05-09 22:47       ` Sean Christopherson
