From: Yang Weijiang <weijiang.yang@intel.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Yang Weijiang <weijiang.yang@intel.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	pbonzini@redhat.com, jmattson@google.com,
	yu.c.zhang@linux.intel.com, alazar@bitdefender.com,
	edwin.zhai@intel.com
Subject: Re: [RESEND PATCH v10 06/10] vmx: spp: Set up SPP paging table at vmentry/vmexit
Date: Mon, 13 Jan 2020 16:10:50 +0800	[thread overview]
Message-ID: <20200113081050.GF12253@local-michael-cet-test.sh.intel.com> (raw)
In-Reply-To: <20200110180458.GG21485@linux.intel.com>

On Fri, Jan 10, 2020 at 10:04:59AM -0800, Sean Christopherson wrote:
> On Thu, Jan 02, 2020 at 02:13:15PM +0800, Yang Weijiang wrote:
> > @@ -3585,7 +3602,30 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
> >  		if ((error_code & PFERR_WRITE_MASK) &&
> >  		    spte_can_locklessly_be_made_writable(spte))
> >  		{
> > -			new_spte |= PT_WRITABLE_MASK;
> > +			/*
> > +			 * Record write protect fault caused by
> > +			 * Sub-page Protection, let VMI decide
> > +			 * the next step.
> > +			 */
> > +			if (spte & PT_SPP_MASK) {
> > +				int len = kvm_x86_ops->get_inst_len(vcpu);
> 
> There's got to be a better way to handle SPP exits than adding a helper
> to retrieve the instruction length.
>
The faulting instruction used to be skipped via kvm_skip_emulated_instruction(),
but Paolo suggested leaving the re-execute-or-skip decision to user space so it
stays flexible for both write protection and write tracking, hence the
instruction length is returned to user space.
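For reference, on the user-space side the exit could be consumed roughly like
the sketch below (untested; KVM_EXIT_SPP and the kvm_run spp fields come from
this series, while the vmm_*() helpers are just placeholders for whatever
policy the VMM implements):

static int handle_spp_exit(struct kvm_run *run)
{
	__u64 gva = run->spp.addr;
	__u32 len = run->spp.insn_len;	/* "insn_len" per the rename above */

	if (vmm_wants_write_tracking(gva)) {
		/*
		 * Track the write: unprotect the sub-page and re-enter
		 * without touching RIP so the guest re-executes the write.
		 */
		return vmm_unprotect_and_resume(gva);
	}

	/*
	 * Otherwise drop the write: advance RIP by the reported length
	 * before re-entering so the guest skips the faulting instruction.
	 */
	return vmm_skip_instruction(len);
}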

> > +
> > +				fault_handled = true;
> > +				vcpu->run->exit_reason = KVM_EXIT_SPP;
> > +				vcpu->run->spp.addr = gva;
> > +				vcpu->run->spp.ins_len = len;
> 
> s/ins_len/insn_len to be consistent with other KVM nomenclature.
> 
OK.

> > +				trace_kvm_spp_induced_page_fault(vcpu,
> > +								 gva,
> > +								 len);
> > +				break;
> > +			}
> > +
> > +			if (was_spp_armed(new_spte)) {
> > +				restore_spp_bit(&new_spte);
> > +				spp_protected = true;
> > +			} else {
> > +				new_spte |= PT_WRITABLE_MASK;
> > +			}
> >  
> >  			/*
> >  			 * Do not fix write-permission on the large spte.  Since
> 
> ...
> 
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 24e4e1c47f42..97d862c79124 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -200,7 +200,6 @@ static const struct {
> >  	[VMENTER_L1D_FLUSH_EPT_DISABLED] = {"EPT disabled", false},
> >  	[VMENTER_L1D_FLUSH_NOT_REQUIRED] = {"not required", false},
> >  };
> > -
> 
> Spurious whitespace.
>
OK.

> >  #define L1D_CACHE_ORDER 4
> >  static void *vmx_l1d_flush_pages;
> >  
> > @@ -2999,6 +2998,7 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
> >  	bool update_guest_cr3 = true;
> >  	unsigned long guest_cr3;
> >  	u64 eptp;
> > +	u64 spptp;
> >  
> >  	guest_cr3 = cr3;
> >  	if (enable_ept) {
> > @@ -3027,6 +3027,12 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
> >  
> >  	if (update_guest_cr3)
> >  		vmcs_writel(GUEST_CR3, guest_cr3);
> > +
> > +	if (kvm->arch.spp_active && VALID_PAGE(vcpu->kvm->arch.sppt_root)) {
> > +		spptp = construct_spptp(vcpu->kvm->arch.sppt_root);
> > +		vmcs_write64(SPPT_POINTER, spptp);
> > +		vmx_flush_tlb(vcpu, true);
> 
> Why is SPP so special that it gets to force TLB flushes?
I double-checked the code; there's already a call to vmx_flush_tlb() in
mmu_load(), so the flush here is unnecessary, thank you!
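So the hunk would shrink to just the SPPT pointer write, e.g. (untested):

	if (kvm->arch.spp_active && VALID_PAGE(vcpu->kvm->arch.sppt_root)) {
		spptp = construct_spptp(vcpu->kvm->arch.sppt_root);
		vmcs_write64(SPPT_POINTER, spptp);
	}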

> > +	}
> >  }
> >  
> >  int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> > @@ -5361,6 +5367,74 @@ static int handle_monitor_trap(struct kvm_vcpu *vcpu)
> >  	return 1;
> >  }
> >  
> > +int handle_spp(struct kvm_vcpu *vcpu)
> 
> Can be static.
>
Thanks!

> > +{
> > +	unsigned long exit_qualification;
> > +	struct kvm_memory_slot *slot;
> > +	gpa_t gpa;
> > +	gfn_t gfn;
> > +
> > +	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
> > +
> > +	/*
> > +	 * SPP VM exit happened while executing iret from NMI,
> > +	 * "blocked by NMI" bit has to be set before next VM entry.
> > +	 * There are errata that may cause this bit to not be set:
> > +	 * AAK134, BY25.
> > +	 */
> > +	if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
> > +	    (exit_qualification & SPPT_INTR_INFO_UNBLOCK_NMI))
> > +		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
> > +			      GUEST_INTR_STATE_NMI);
> > +
> > +	vcpu->arch.exit_qualification = exit_qualification;
> 
> 	if (WARN_ON(!(exit_qualification & SPPT_INDUCED_EXIT_TYPE)))
> 		goto out_err;
> 
> 	<handle spp exit>
> 
> 	return 1;
> 
> out_err:
> 	vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
> 	vcpu->run->hw.hardware_exit_reason = EXIT_REASON_SPP;
> 	return 0;
>
Sure, will change it.
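With that, the handler would end up roughly like this (untested, following your
outline; the SPPT-miss handling from the hunk below is elided):

static int handle_spp(struct kvm_vcpu *vcpu)
{
	unsigned long exit_qualification;

	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);

	/*
	 * SPP VM exit happened while executing iret from NMI,
	 * "blocked by NMI" bit has to be set before next VM entry.
	 * There are errata that may cause this bit to not be set:
	 * AAK134, BY25.
	 */
	if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
	    (exit_qualification & SPPT_INTR_INFO_UNBLOCK_NMI))
		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
			      GUEST_INTR_STATE_NMI);

	vcpu->arch.exit_qualification = exit_qualification;

	if (WARN_ON(!(exit_qualification & SPPT_INDUCED_EXIT_TYPE)))
		goto out_err;

	/* <handle the SPPT-miss induced exit as in the hunk below> */

	return 1;

out_err:
	vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
	vcpu->run->hw.hardware_exit_reason = EXIT_REASON_SPP;
	return 0;
}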

> > +	if (exit_qualification & SPPT_INDUCED_EXIT_TYPE) {
> > +		int page_num = KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL);
> 
> The compiler is probably clever enough to make these constants, but if
> this logic is a fundamental property of SPP then it should be defined as
> a macro somewhere.
>
OK, will change it.
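Maybe something along these lines (the names are just placeholders):

/* 4KB pages covered by the 2MB region a single SPPT-miss exit refers to. */
#define SPP_ENTRY_NR_PAGES	KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL)
#define SPP_ENTRY_PAGE_MASK	(~((u64)SPP_ENTRY_NR_PAGES - 1))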

> > +		u32 *access;
> > +		gfn_t gfn_max;
> > +
> > +		/*
> > +		 * SPPT missing
> > +		 * We don't set SPP write access for the corresponding
> > +		 * GPA, if we haven't setup, we need to construct
> > +		 * SPP table here.
> > +		 */
> > +		gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> > +		gfn = gpa >> PAGE_SHIFT;
> 
> gpa_to_gfn()
>
OK.

> > +		trace_kvm_spp_induced_exit(vcpu, gpa, exit_qualification);
> > +		/*
> > +		 * In level 1 of SPPT, there's no PRESENT bit, all data is
> > +		 * regarded as permission vector, so need to check from
> > +		 * level 2 to set up the vector if target page is protected.
> > +		 */
> > +		spin_lock(&vcpu->kvm->mmu_lock);
> > +		gfn &= ~(page_num - 1);
> 
> 
> 
> > +		gfn_max = gfn + page_num - 1;
> 
> s/gfn_max/gfn_end
OK.

> 
> > +		for (; gfn <= gfn_max; gfn++) {
> 
> My preference would be to do:
> 		gfn_end = gfn + page_num;
> 
> 		for ( ; gfn < gfn_end; gfn++)
>
Thank you!
> > +			slot = gfn_to_memslot(vcpu->kvm, gfn);


