From: Ben Gardon <bgardon@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Peter Xu <peterx@redhat.com>, Peter Shier <pshier@google.com>,
	David Matlack <dmatlack@google.com>,
	Mingwei Zhang <mizhang@google.com>,
	Yulei Zhang <yulei.kernel@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>,
	Kai Huang <kai.huang@intel.com>,
	Keqian Zhu <zhukeqian1@huawei.com>,
	David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH 11/15] KVM: x86/MMU: Refactor vmx_get_mt_mask
Date: Mon, 22 Nov 2021 10:11:53 -0800	[thread overview]
Message-ID: <CANgfPd-_7tR9tSJg85-0wAG72454qeedovhBvbX6OS1YNRxvMw@mail.gmail.com> (raw)
In-Reply-To: <942d487e-ba6b-9c60-e200-3590524137b9@redhat.com>

On Fri, Nov 19, 2021 at 1:03 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 11/18/21 16:30, Sean Christopherson wrote:
> > On Thu, Nov 18, 2021, Paolo Bonzini wrote:
> >> On 11/16/21 00:45, Ben Gardon wrote:
> >>> Remove the gotos from vmx_get_mt_mask to make it easier to separate out
> >>> the parts which do not depend on vcpu state.
> >>>
> >>> No functional change intended.
> >>>
> >>>
> >>> Signed-off-by: Ben Gardon <bgardon@google.com>
> >>
> >> Queued, thanks (with a slightly edited commit message; the patch is a
> >> simplification anyway).
> >
> > Don't know what message you've queued, but just in case you kept some of the original,
> > can you further edit it to remove any snippets that mention separating out the parts
> > that don't depend on vCPU state?
>
> Indeed I did keep some:
>
> commit b7297e02826857e068d03f844c8336ce48077d78
> Author: Ben Gardon <bgardon@google.com>
> Date:   Mon Nov 15 15:45:59 2021 -0800
>
>      KVM: x86/MMU: Simplify flow of vmx_get_mt_mask
>
>      Remove the gotos from vmx_get_mt_mask.  This may later make it easier
>      to separate out the parts which do not depend on vcpu state, but it also
>      simplifies the code in general.
>
>      No functional change intended.
>
> i.e. keeping it conditional but I can edit it further, like
>
>      Remove the gotos from vmx_get_mt_mask.  It's easier to build the whole
>      memory type at once, than it is to combine separate cacheability and ipat
>      fields.
>
> Paolo
>
> > IMO, we should not separate vmx_get_mt_mask() into per-VM and per-vCPU variants,
> > because the per-vCPU variant is a lie.  The memtype of a SPTE is not tracked anywhere,
> > which means that if the guest has non-uniform CR0.CD/NW or MTRR settings, KVM will
> > happily let the guest consume SPTEs with the incorrect memtype.  In practice, this
> > isn't an issue because no sane BIOS or kernel uses per-CPU MTRRs, nor do they have
> > DMA operations running while the cacheability state is in flux.
> >
> > If we really want to make this state per-vCPU, KVM would need to incorporate the
> > CR0.CD and MTRR settings in kvm_mmu_page_role.  For MTRRs in particular, the worst
> > case scenario is that every vCPU has different MTRR settings, which means that
> > kvm_mmu_page_role would need to be expanded by 10 bits in order to track every
> > possible vcpu_idx (currently capped at 1024).
>
> Yes, that's insanity.  I was also a bit skeptical about Ben's try_get_mt_mask callback,
> but this would be much much worse.

Yeah, the implementation of that felt a bit kludgy to me too, but
refactoring the handling of all those CR bits was way more complex
than I wanted to handle in this patch set.
I'd love to see some of those CR0 / MTRR settings be set on a VM basis
and enforced as uniform across vCPUs.
Looking up vCPU 0 and basing things on that feels extra hacky though,
especially if we're still not asserting uniformity of settings across
vCPUs.
If we need to track that state to accurately virtualize the hardware
though, that would be unfortunate.

>
> Paolo
>
> > So unless we want to massively complicate kvm_mmu_page_role and gfn_track for a
> > scenario no one cares about, I would strongly prefer to acknowledge that KVM assumes
> > memtypes are a per-VM property, e.g. on top:
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 77f45c005f28..8a84d30f1dbd 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -6984,8 +6984,9 @@ static int __init vmx_check_processor_compat(void)
> >          return 0;
> >   }
> >
> > -static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> > +static u64 vmx_get_mt_mask(struct kvm *kvm, gfn_t gfn, bool is_mmio)
> >   {
> > +       struct kvm_vcpu *vcpu;
> >          u8 cache;
> >
> >          /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in
> > @@ -7009,11 +7010,15 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> >          if (is_mmio)
> >                  return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
> >
> > -       if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
> > +       if (!kvm_arch_has_noncoherent_dma(kvm))
> >                  return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> >
> > +       vcpu = kvm_get_vcpu_by_id(kvm, 0);
> > +       if (KVM_BUG_ON(!vcpu, kvm))
> > +               return 0;
> > +
> >          if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
> > -               if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> > +               if (kvm_check_has_quirk(kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> >                          cache = MTRR_TYPE_WRBACK;
> >                  else
> >                          cache = MTRR_TYPE_UNCACHABLE;
> >
>


Thread overview: 32+ messages
2021-11-15 23:45 [PATCH 00/15] Currently disabling dirty logging with the TDP MMU is extremely slow. On a 96 vCPU / 96G VM it takes ~45 seconds to disable dirty logging with the TDP MMU, as opposed to ~3.5 seconds with the legacy MMU. This series optimizes TLB flushes and introduces in-place large page promotion, to bring the disable dirty log time down to ~2 seconds Ben Gardon
2021-11-15 23:45 ` [PATCH 01/15] KVM: x86/mmu: Remove redundant flushes when disabling dirty logging Ben Gardon
2021-11-18  8:26   ` Paolo Bonzini
2021-11-15 23:45 ` [PATCH 02/15] KVM: x86/mmu: Introduce vcpu_make_spte Ben Gardon
2021-11-15 23:45 ` [PATCH 03/15] KVM: x86/mmu: Factor wrprot for nested PML out of make_spte Ben Gardon
2021-11-15 23:45 ` [PATCH 04/15] KVM: x86/mmu: Factor mt_mask " Ben Gardon
2021-11-15 23:45 ` [PATCH 05/15] KVM: x86/mmu: Remove need for a vcpu from kvm_slot_page_track_is_active Ben Gardon
2021-11-18  8:25   ` Paolo Bonzini
2021-11-15 23:45 ` [PATCH 06/15] KVM: x86/mmu: Remove need for a vcpu from mmu_try_to_unsync_pages Ben Gardon
2021-11-18  8:25   ` Paolo Bonzini
2021-11-15 23:45 ` [PATCH 07/15] KVM: x86/mmu: Factor shadow_zero_check out of make_spte Ben Gardon
2021-11-15 23:45 ` [PATCH 08/15] KVM: x86/mmu: Replace vcpu argument with kvm pointer in make_spte Ben Gardon
2021-11-15 23:45 ` [PATCH 09/15] KVM: x86/mmu: Factor out the meat of reset_tdp_shadow_zero_bits_mask Ben Gardon
2021-11-15 23:45 ` [PATCH 10/15] KVM: x86/mmu: Propagate memslot const qualifier Ben Gardon
2021-11-18  8:27   ` Paolo Bonzini
2021-11-15 23:45 ` [PATCH 11/15] KVM: x86/MMU: Refactor vmx_get_mt_mask Ben Gardon
2021-11-18  8:30   ` Paolo Bonzini
2021-11-18 15:30     ` Sean Christopherson
2021-11-19  9:02       ` Paolo Bonzini
2021-11-22 18:11         ` Ben Gardon [this message]
2021-11-22 18:46           ` Sean Christopherson
2021-11-15 23:46 ` [PATCH 12/15] KVM: x86/mmu: Factor out part of vmx_get_mt_mask which does not depend on vcpu Ben Gardon
2021-11-15 23:46 ` [PATCH 13/15] KVM: x86/mmu: Add try_get_mt_mask to x86_ops Ben Gardon
2021-11-15 23:46 ` [PATCH 14/15] KVM: x86/mmu: Make kvm_is_mmio_pfn usable outside of spte.c Ben Gardon
2021-11-15 23:46 ` [PATCH 15/15] KVM: x86/mmu: Promote pages in-place when disabling dirty logging Ben Gardon
2021-11-25  4:18   ` Peter Xu
2021-11-29 18:31     ` Ben Gardon
2021-11-30  0:13       ` Sean Christopherson
2021-11-30  7:28       ` Peter Xu
2021-11-30 16:01         ` Sean Christopherson
2021-12-01  1:59           ` Peter Xu
2021-11-15 23:58 ` [PATCH 00/15] Currently disabling dirty logging with the TDP MMU is extremely slow. On a 96 vCPU / 96G VM it takes ~45 seconds to disable dirty logging with the TDP MMU, as opposed to ~3.5 seconds with the legacy MMU. This series optimizes TLB flushes and introduces in-place large page promotion, to bring the disable dirty log time down to ~2 seconds Ben Gardon
