From: Kai Huang <kai.huang@intel.com>
To: Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Michael Roth <michael.roth@amd.com>,
	Tom Lendacky <thomas.lendacky@amd.com>
Subject: Re: [PATCH 2/4] KVM: x86/mmu: Fully re-evaluate MMIO caching when SPTE masks change
Date: Fri, 29 Jul 2022 14:39:10 +1200	[thread overview]
Message-ID: <9104e22da628fef86a6e8a02d9d2e81814a9d598.camel@intel.com> (raw)
In-Reply-To: <20220728221759.3492539-3-seanjc@google.com>

On Thu, 2022-07-28 at 22:17 +0000, Sean Christopherson wrote:
> Fully re-evaluate whether or not MMIO caching can be enabled when SPTE
> masks change; simply clearing enable_mmio_caching when a configuration
> isn't compatible with caching fails to handle the scenario where the
> masks are updated, e.g. by VMX for EPT or by SVM to account for the C-bit
> location, and toggle compatibility from false=>true.
> 
> Snapshot the original module param so that re-evaluating MMIO caching
> preserves userspace's desire to allow caching.  Use a snapshot approach
> so that enable_mmio_caching still reflects KVM's actual behavior.
> 
> Fixes: 8b9e74bfbf8c ("KVM: x86/mmu: Use enable_mmio_caching to track if MMIO caching is enabled")
> Reported-by: Michael Roth <michael.roth@amd.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c  |  4 ++++
>  arch/x86/kvm/mmu/spte.c | 19 +++++++++++++++++++
>  arch/x86/kvm/mmu/spte.h |  1 +
>  3 files changed, 24 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 2975fcb14c86..660f58928252 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6699,11 +6699,15 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
>  /*
>   * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
>   * its default value of -1 is technically undefined behavior for a boolean.
> + * Forward the module init call to SPTE code so that it too can handle module
> + * params that need to be resolved/snapshot.
>   */
>  void __init kvm_mmu_x86_module_init(void)
>  {
>  	if (nx_huge_pages == -1)
>  		__set_nx_huge_pages(get_nx_auto_mode());
> +
> +	kvm_mmu_spte_module_init();
>  }
>  
>  /*
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 7314d27d57a4..66f76f5a15bd 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -20,6 +20,7 @@
>  #include <asm/vmx.h>
>  
>  bool __read_mostly enable_mmio_caching = true;
> +static bool __ro_after_init allow_mmio_caching;
>  module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
>  
>  u64 __read_mostly shadow_host_writable_mask;
> @@ -43,6 +44,18 @@ u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
>  
>  u8 __read_mostly shadow_phys_bits;
>  
> +void __init kvm_mmu_spte_module_init(void)
> +{
> +	/*
> +	 * Snapshot userspace's desire to allow MMIO caching.  Whether or not
> +	 * KVM can actually enable MMIO caching depends on vendor-specific
> +	 * hardware capabilities and other module params that can't be resolved
> +	 * until the vendor module is loaded, i.e. enable_mmio_caching can and
> +	 * will change when the vendor module is (re)loaded.
> +	 */
> +	allow_mmio_caching = enable_mmio_caching;
> +}
> +
>  static u64 generation_mmio_spte_mask(u64 gen)
>  {
>  	u64 mask;
> @@ -340,6 +353,12 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	BUG_ON((u64)(unsigned)access_mask != access_mask);
>  	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
>  
> +	/*
> +	 * Reset to the original module param value to honor userspace's desire
> +	 * to (dis)allow MMIO caching.  Update the param itself so that
> +	 * userspace can see whether or not KVM is actually using MMIO caching.
> +	 */
> +	enable_mmio_caching = allow_mmio_caching;

I think the problem comes from the fact that the MMIO caching mask/value are
first set in kvm_mmu_reset_all_pte_masks() (which calls
kvm_mmu_set_mmio_spte_mask() and may change enable_mmio_caching), and
vendor-specific code later _may_ or _may_not_ call kvm_mmu_set_mmio_spte_mask()
again to adjust the mask/value.  When it does, that second call shouldn't
depend on the 'enable_mmio_caching' value calculated by the first call in
kvm_mmu_reset_all_pte_masks().

Instead of introducing 'allow_mmio_caching', should we just remove the
kvm_mmu_set_mmio_spte_mask() call from kvm_mmu_reset_all_pte_masks() and
require vendor-specific code to always call kvm_mmu_set_mmio_spte_mask()
based on whatever hardware feature the vendor uses?

I am suggesting this because Isaku's TDX patch

[PATCH v7 037/102] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis

will enable a per-VM MMIO mask/value, removing the global
shadow_mmio_mask/shadow_mmio_value, and I have already suggested something
similar there:

https://lore.kernel.org/all/20220719084737.GU1379820@ls.amr.corp.intel.com/

