* [PATCHv4 00/11] SVM: virtual NMI
@ 2023-02-27  8:40 Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR Santosh Shukla
                   ` (13 more replies)
  0 siblings, 14 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets


v2:
https://lore.kernel.org/all/0f56e139-4c7f-5220-a4a2-99f87f45fd83@amd.com/

v3:
https://lore.kernel.org/all/20230227035400.1498-1-santosh.shukla@amd.com/
 - 09/11: Combined the x86_ops delayed-NMI changes with the vNMI
   changes into one patch, for better readability (Sean's suggestion).
 - The series includes the suggestions and fixes proposed against v2.
   Refer to each patch for its change history (v2-->v3).

v4:
 - Resent with patch 01/11, which was missing from the v3 posting.

Series based on [1] and tested on AMD EPYC-Genoa.


APM: (Ch. 15.21.10 - NMI Virtualization)
https://www.amd.com/en/support/tech-docs/amd64-architecture-programmers-manual-volumes-1-5

For past history and earlier work, refer to v5:
https://lkml.org/lkml/2022/10/27/261

Thanks,
Santosh
[1] https://github.com/kvm-x86/linux branch kvm-x86/next (62ef199250cd46f)



Maxim Levitsky (2):
  KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
  KVM: SVM: add wrappers to enable/disable IRET interception

Santosh Shukla (6):
  KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is
    intercepting VINTR
  KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0
  x86/cpu: Add CPUID feature bit for VNMI
  KVM: SVM: Add VNMI bit definition
  KVM: x86: add support for delayed virtual NMI injection interface
  KVM: nSVM: implement support for nested VNMI

Sean Christopherson (3):
  KVM: x86: Raise an event request when processing NMIs if an NMI is
    pending
  KVM: x86: Tweak the code and comment related to handling concurrent
    NMIs
  KVM: x86: Save/restore all NMIs when multiple NMIs are pending

 arch/x86/include/asm/cpufeatures.h |   1 +
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |  11 ++-
 arch/x86/include/asm/svm.h         |   9 ++
 arch/x86/kvm/svm/nested.c          |  94 +++++++++++++++---
 arch/x86/kvm/svm/svm.c             | 152 +++++++++++++++++++++++------
 arch/x86/kvm/svm/svm.h             |  28 ++++++
 arch/x86/kvm/x86.c                 |  46 +++++++--
 8 files changed, 289 insertions(+), 54 deletions(-)

-- 
2.25.1



* [PATCHv4 01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 02/11] KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0 Santosh Shukla
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Santosh Shukla <Santosh.Shukla@amd.com>

Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting
virtual interrupts in order to request an interrupt window, as KVM
has usurped vmcb02's int_ctl.  If an interrupt window opens before
the next VM-Exit, svm_clear_vintr() will restore vmcb12's int_ctl.
If no window opens, V_IRQ will be correctly preserved in vmcb12's
int_ctl (because it was never recognized while L2 was running).

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---
v4:
https://lore.kernel.org/all/Y9hybI65So5X2LFg@google.com/
suggested by Sean.

 arch/x86/kvm/svm/nested.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 05d38944a6c0..fbade158d368 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -416,18 +416,17 @@ void nested_sync_control_from_vmcb02(struct vcpu_svm *svm)
 
 	/* Only a few fields of int_ctl are written by the processor.  */
 	mask = V_IRQ_MASK | V_TPR_MASK;
-	if (!(svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) &&
-	    svm_is_intercept(svm, INTERCEPT_VINTR)) {
-		/*
-		 * In order to request an interrupt window, L0 is usurping
-		 * svm->vmcb->control.int_ctl and possibly setting V_IRQ
-		 * even if it was clear in L1's VMCB.  Restoring it would be
-		 * wrong.  However, in this case V_IRQ will remain true until
-		 * interrupt_window_interception calls svm_clear_vintr and
-		 * restores int_ctl.  We can just leave it aside.
-		 */
+	/*
+	 * Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting
+	 * virtual interrupts in order to request an interrupt window, as KVM
+	 * has usurped vmcb02's int_ctl.  If an interrupt window opens before
+	 * the next VM-Exit, svm_clear_vintr() will restore vmcb12's int_ctl.
+	 * If no window opens, V_IRQ will be correctly preserved in vmcb12's
+	 * int_ctl (because it was never recognized while L2 was running).
+	 */
+	if (svm_is_intercept(svm, INTERCEPT_VINTR) &&
+	   !test_bit(INTERCEPT_VINTR, (unsigned long *)svm->nested.ctl.intercepts))
 		mask &= ~V_IRQ_MASK;
-	}
 
 	if (nested_vgif_enabled(svm))
 		mask |= V_GIF_MASK;
-- 
2.25.1



* [PATCHv4 02/11] KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 03/11] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs Santosh Shukla
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Santosh Shukla <Santosh.Shukla@amd.com>

Disable intercept of virtual interrupts (used to
detect interrupt windows) if the saved RFLAGS.IF is '0', as
the effective RFLAGS.IF for L1 interrupts will never be set
while L2 is running (L2's RFLAGS.IF doesn't affect L1 IRQs).

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---
v3:
https://lore.kernel.org/all/Y9hybI65So5X2LFg@google.com/
suggested by Sean.

 arch/x86/kvm/svm/nested.c | 15 ++++++++++-----
 arch/x86/kvm/svm/svm.c    | 10 ++++++++++
 2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index fbade158d368..107258ed46ee 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -139,13 +139,18 @@ void recalc_intercepts(struct vcpu_svm *svm)
 
 	if (g->int_ctl & V_INTR_MASKING_MASK) {
 		/*
-		 * Once running L2 with HF_VINTR_MASK, EFLAGS.IF and CR8
-		 * does not affect any interrupt we may want to inject;
-		 * therefore, writes to CR8 are irrelevant to L0, as are
-		 * interrupt window vmexits.
+		 * If L2 is active and V_INTR_MASKING is enabled in vmcb12,
+		 * disable intercept of CR8 writes as L2's CR8 does not affect
+		 * any interrupt KVM may want to inject.
+		 *
+		 * Similarly, disable intercept of virtual interrupts (used to
+		 * detect interrupt windows) if the saved RFLAGS.IF is '0', as
+		 * the effective RFLAGS.IF for L1 interrupts will never be set
+		 * while L2 is running (L2's RFLAGS.IF doesn't affect L1 IRQs).
 		 */
 		vmcb_clr_intercept(c, INTERCEPT_CR8_WRITE);
-		vmcb_clr_intercept(c, INTERCEPT_VINTR);
+		if (!(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF))
+			vmcb_clr_intercept(c, INTERCEPT_VINTR);
 	}
 
 	/*
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b43775490074..cf6ae093ed19 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1583,6 +1583,16 @@ static void svm_set_vintr(struct vcpu_svm *svm)
 
 	svm_set_intercept(svm, INTERCEPT_VINTR);
 
+	/*
+	 * Recalculating intercepts may have cleared the VINTR intercept.  If
+	 * V_INTR_MASKING is enabled in vmcb12, then the effective RFLAGS.IF
+	 * for L1 physical interrupts is L1's RFLAGS.IF at the time of VMRUN.
+	 * Requesting an interrupt window if save.RFLAGS.IF=0 is pointless as
+	 * interrupts will never be unblocked while L2 is running.
+	 */
+	if (!svm_is_intercept(svm, INTERCEPT_VINTR))
+		return;
+
 	/*
 	 * This is just a dummy VINTR to actually cause a vmexit to happen.
 	 * Actual injection of virtual interrupts happens through EVENTINJ.
-- 
2.25.1



* [PATCHv4 03/11] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 02/11] KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0 Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 04/11] KVM: SVM: add wrappers to enable/disable IRET interception Santosh Shukla
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Maxim Levitsky <mlevitsk@redhat.com>

If L1 doesn't intercept interrupts, then KVM will use vmcb02's V_IRQ
to detect an interrupt window for L1.

In this case, on nested VM exit KVM might need to copy the V_IRQ bit
from vmcb02 to vmcb01 in order to continue waiting for the interrupt
window.

To keep it simple, just raise the KVM_REQ_EVENT request, whose
handling will re-enable the interrupt window if needed.

Note that this is a theoretical bug, because KVM already raises
KVM_REQ_EVENT on each nested VM exit: the nested VM exit resets
RFLAGS, and kvm_set_rflags() raises KVM_REQ_EVENT in response.

However, raising this request explicitly, together with documenting
why it is needed, is still preferred.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
[reworded description as per Sean's v2 comment]
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---
v3:
Reworded commit description per Sean's v2 comment:
https://lore.kernel.org/all/Y9RypRsfpLteK51v@google.com/

 arch/x86/kvm/svm/nested.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 107258ed46ee..74e9e9e76d77 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1025,6 +1025,31 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	svm_switch_vmcb(svm, &svm->vmcb01);
 
+	/* Note about synchronizing some of the int_ctl bits from vmcb02 to vmcb01:
+	 *
+	 * V_IRQ, V_IRQ_VECTOR, V_INTR_PRIO_MASK, V_IGN_TPR:
+	 * If L1 doesn't intercept interrupts, then
+	 * (even if L1 does use virtual interrupt masking),
+	 * KVM will use vmcb02's V_IRQ to detect an interrupt window.
+	 *
+	 * In this case, KVM raises KVM_REQ_EVENT to ensure that the interrupt
+	 * window is not lost and implicitly copies the V_IRQ bit from vmcb02 to vmcb01.
+	 *
+	 * V_TPR:
+	 * If L1 doesn't use virtual interrupt masking, then L1's vTPR is
+	 * stored in vmcb02, but its value doesn't need to be copied from/to
+	 * vmcb01 because it is copied from/to the APIC's TPR register on
+	 * each VM entry/exit.
+	 *
+	 * V_GIF:
+	 * If nested vGIF is not used, KVM uses vmcb02's V_GIF for L1's
+	 * V_GIF; however, L1's vGIF is reset to false on each VM exit, thus
+	 * there is no need to copy it from vmcb02 to vmcb01.
+	 */
+
+	if (!nested_exit_on_intr(svm))
+		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+
 	if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
 		svm_copy_lbrs(vmcb12, vmcb02);
 		svm_update_lbrv(vcpu);
-- 
2.25.1



* [PATCHv4 04/11] KVM: SVM: add wrappers to enable/disable IRET interception
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (2 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 03/11] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 05/11] KVM: x86: Raise an event request when processing NMIs if an NMI is pending Santosh Shukla
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Maxim Levitsky <mlevitsk@redhat.com>

SEV-ES guests don't use IRET interception to detect the end of an NMI.

Therefore it makes sense to add wrappers that avoid repeating the
SEV-ES check at each call site.

No functional change is intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
[Renamed iret intercept API of style svm_{clr,set}_iret_intercept()]
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---
v3:
- renamed iret_intercept API
https://lore.kernel.org/all/a5d8307b-ffe6-df62-5e22-dffd19755baa@amd.com/

 arch/x86/kvm/svm/svm.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cf6ae093ed19..da936723e8ca 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2490,16 +2490,29 @@ static int task_switch_interception(struct kvm_vcpu *vcpu)
 			       has_error_code, error_code);
 }
 
+static void svm_clr_iret_intercept(struct vcpu_svm *svm)
+{
+	if (!sev_es_guest(svm->vcpu.kvm))
+		svm_clr_intercept(svm, INTERCEPT_IRET);
+}
+
+static void svm_set_iret_intercept(struct vcpu_svm *svm)
+{
+	if (!sev_es_guest(svm->vcpu.kvm))
+		svm_set_intercept(svm, INTERCEPT_IRET);
+}
+
 static int iret_interception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	++vcpu->stat.nmi_window_exits;
 	svm->awaiting_iret_completion = true;
-	if (!sev_es_guest(vcpu->kvm)) {
-		svm_clr_intercept(svm, INTERCEPT_IRET);
+
+	svm_clr_iret_intercept(svm);
+	if (!sev_es_guest(vcpu->kvm))
 		svm->nmi_iret_rip = kvm_rip_read(vcpu);
-	}
+
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return 1;
 }
@@ -3491,8 +3504,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 		return;
 
 	svm->nmi_masked = true;
-	if (!sev_es_guest(vcpu->kvm))
-		svm_set_intercept(svm, INTERCEPT_IRET);
+	svm_set_iret_intercept(svm);
 	++vcpu->stat.nmi_injections;
 }
 
@@ -3632,12 +3644,10 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 
 	if (masked) {
 		svm->nmi_masked = true;
-		if (!sev_es_guest(vcpu->kvm))
-			svm_set_intercept(svm, INTERCEPT_IRET);
+		svm_set_iret_intercept(svm);
 	} else {
 		svm->nmi_masked = false;
-		if (!sev_es_guest(vcpu->kvm))
-			svm_clr_intercept(svm, INTERCEPT_IRET);
+		svm_clr_iret_intercept(svm);
 	}
 }
 
-- 
2.25.1



* [PATCHv4 05/11] KVM: x86: Raise an event request when processing NMIs if an NMI is pending
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (3 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 04/11] KVM: SVM: add wrappers to enable/disable IRET interception Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 06/11] KVM: x86: Tweak the code and comment related to handling concurrent NMIs Santosh Shukla
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Sean Christopherson <seanjc@google.com>

Don't raise KVM_REQ_EVENT if no NMIs are pending at the end of
process_nmi().  Finishing process_nmi() without a pending NMI will become
much more likely when KVM gains support for AMD's vNMI, which allows
pending vNMIs in hardware, i.e. doesn't require explicit injection.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---
 arch/x86/kvm/x86.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f706621c35b8..1cd9cadc82af 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10148,7 +10148,9 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
 	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
-	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+	if (vcpu->arch.nmi_pending)
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 
 void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
-- 
2.25.1



* [PATCHv4 06/11] KVM: x86: Tweak the code and comment related to handling concurrent NMIs
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (4 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 05/11] KVM: x86: Raise an event request when processing NMIs if an NMI is pending Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 07/11] KVM: x86: Save/restore all NMIs when multiple NMIs are pending Santosh Shukla
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Sean Christopherson <seanjc@google.com>

Tweak the code and comment that deals with concurrent NMIs to explicitly
call out that x86 allows exactly one pending NMI, but that KVM needs to
temporarily allow two pending NMIs in order to work around the fact that
the target vCPU cannot immediately recognize an incoming NMI, unlike bare
metal.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---
v3:
https://lore.kernel.org/all/Y9mtGV+q0P2U9+M1@google.com/
from Sean comment.

 arch/x86/kvm/x86.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1cd9cadc82af..16590e094899 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10136,15 +10136,22 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
 
 static void process_nmi(struct kvm_vcpu *vcpu)
 {
-	unsigned limit = 2;
+	unsigned int limit;
 
 	/*
-	 * x86 is limited to one NMI running, and one NMI pending after it.
-	 * If an NMI is already in progress, limit further NMIs to just one.
-	 * Otherwise, allow two (and we'll inject the first one immediately).
+	 * x86 is limited to one NMI pending, but because KVM can't react to
+	 * incoming NMIs as quickly as bare metal, e.g. if the vCPU is
+	 * scheduled out, KVM needs to play nice with two queued NMIs showing
+	 * up at the same time.  To handle this scenario, allow two NMIs to be
+	 * (temporarily) pending so long as NMIs are not blocked and KVM is not
+	 * waiting for a previous NMI injection to complete (which effectively
+	 * blocks NMIs).  KVM will immediately inject one of the two NMIs, and
+	 * will request an NMI window to handle the second NMI.
 	 */
 	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
 		limit = 1;
+	else
+		limit = 2;
 
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
 	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
-- 
2.25.1



* [PATCHv4 07/11] KVM: x86: Save/restore all NMIs when multiple NMIs are pending
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (5 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 06/11] KVM: x86: Tweak the code and comment related to handling concurrent NMIs Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-02-27  8:40 ` [PATCHv4 08/11] x86/cpu: Add CPUID feature bit for VNMI Santosh Shukla
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

From: Sean Christopherson <seanjc@google.com>

Save all pending NMIs in KVM_GET_VCPU_EVENTS, and queue KVM_REQ_NMI if one
or more NMIs are pending after KVM_SET_VCPU_EVENTS in order to re-evaluate
pending NMIs with respect to NMI blocking.

KVM allows multiple NMIs to be pending in order to faithfully emulate bare
metal handling of simultaneous NMIs (on bare metal, truly simultaneous
NMIs are impossible, i.e. one will always arrive first and be consumed).
Support for simultaneous NMIs botched the save/restore though.  KVM only
saves one pending NMI, but allows userspace to restore 255 pending NMIs
as kvm_vcpu_events.nmi.pending is a u8, and KVM's internal state is stored
in an unsigned int.
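
A minimal userspace sketch of the ABI in question (vcpu_fd is an
assumed, already-created KVM vCPU fd; the ioctls and struct
kvm_vcpu_events are the real <linux/kvm.h> API):

	struct kvm_vcpu_events ev = {};

	if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &ev))
		err(1, "KVM_GET_VCPU_EVENTS");

	/* ev.nmi.pending is a u8; before this fix, KVM saved only 0 or 1. */
	ev.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;
	if (ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &ev))
		err(1, "KVM_SET_VCPU_EVENTS");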

Fixes: 7460fb4a3400 ("KVM: Fix simultaneous NMIs")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
---

v3:
- There is a checkpatch warning about the Fixes tag, shown below:

 WARNING: Unknown commit id '7460fb4a3400', maybe rebased or not pulled?
 #19:
 Fixes: 7460fb4a3400 ("KVM: Fix simultaneous NMIs")

 total: 0 errors, 1 warnings, 20 lines checked

The warning can be ignored; the referenced commit has been part of the kernel since v3.2.

 arch/x86/kvm/x86.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 16590e094899..b22074f467e0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5113,7 +5113,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
 
 	events->nmi.injected = vcpu->arch.nmi_injected;
-	events->nmi.pending = vcpu->arch.nmi_pending != 0;
+	events->nmi.pending = vcpu->arch.nmi_pending;
 	events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);
 
 	/* events->sipi_vector is never valid when reporting to user space */
@@ -5200,8 +5200,11 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 						events->interrupt.shadow);
 
 	vcpu->arch.nmi_injected = events->nmi.injected;
-	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
+	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
 		vcpu->arch.nmi_pending = events->nmi.pending;
+		if (vcpu->arch.nmi_pending)
+			kvm_make_request(KVM_REQ_NMI, vcpu);
+	}
 	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
 
 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
-- 
2.25.1



* [PATCHv4 08/11] x86/cpu: Add CPUID feature bit for VNMI
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (6 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 07/11] KVM: x86: Save/restore all NMIs when multiple NMIs are pending Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-03-22 19:07   ` Sean Christopherson
  2023-02-27  8:40 ` [PATCHv4 09/11] KVM: SVM: Add VNMI bit definition Santosh Shukla
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

The VNMI feature allows the hypervisor to inject an NMI into the guest
without using the event injection mechanism. The benefit of VNMI over
event injection is that it does not require tracking the guest's NMI
state or intercepting IRET to detect NMI completion. VNMI achieves
this by exposing 3 capability bits in the VMCB's int_ctl field that
help with virtualizing NMI injection and NMI masking.

The presence of this feature is indicated via the CPUID function
0x8000000A_EDX[25].
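
As a minimal sketch, the bit can be probed directly via CPUID (the
helper name here is hypothetical; in-kernel code would normally use
boot_cpu_has(X86_FEATURE_AMD_VNMI) once the flag below is defined):

	#include <linux/bits.h>
	#include <asm/processor.h>

	static bool cpu_has_amd_vnmi(void)
	{
		/* CPUID Fn8000_000A_EDX[25] enumerates vNMI support. */
		return !!(cpuid_edx(0x8000000A) & BIT(25));
	}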

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Santosh Shukla <santosh.shukla@amd.com>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index cdb7e1492311..b3ae49f36008 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -365,6 +365,7 @@
 #define X86_FEATURE_VGIF		(15*32+16) /* Virtual GIF */
 #define X86_FEATURE_X2AVIC		(15*32+18) /* Virtual x2apic */
 #define X86_FEATURE_V_SPEC_CTRL		(15*32+20) /* Virtual SPEC_CTRL */
+#define X86_FEATURE_AMD_VNMI		(15*32+25) /* Virtual NMI */
 #define X86_FEATURE_SVME_ADDR_CHK	(15*32+28) /* "" SVME addr check */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
-- 
2.25.1



* [PATCHv4 09/11] KVM: SVM: Add VNMI bit definition
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (7 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 08/11] x86/cpu: Add CPUID feature bit for VNMI Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-03-23  0:54   ` Sean Christopherson
  2023-02-27  8:40 ` [PATCHv4 10/11] KVM: x86: add support for delayed virtual NMI injection interface Santosh Shukla
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

VNMI exposes 3 capability bits (V_NMI, V_NMI_MASK, and V_NMI_ENABLE)
that virtualize NMI and NMI masking. These bits are part of
VMCB::int_ctl:
V_NMI_PENDING_MASK (bit 11) - Indicates whether a virtual NMI is
pending in the guest.
V_NMI_BLOCKING_MASK (bit 12) - Indicates whether a virtual NMI is
masked in the guest.
V_NMI_ENABLE_MASK (bit 26) - Enables the NMI virtualization feature
for the guest.

When the hypervisor wants to inject an NMI, it sets the V_NMI bit.
The processor then clears V_NMI and sets V_NMI_MASK, which indicates
that the guest is handling an NMI. After the guest has handled the
NMI, the processor clears V_NMI_MASK on successful completion of the
IRET instruction, or when a VMEXIT occurs while delivering the
virtual NMI.

To enable the VNMI capability, the hypervisor must set
V_NMI_ENABLE_MASK to 1.
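
For illustration, the handshake above can be sketched over the new
int_ctl bit definitions (processor-behavior pseudocode, not kernel
code):

	/* Hypervisor requests an NMI: */
	int_ctl |= V_NMI_PENDING_MASK;

	/* Processor, once a virtual NMI becomes deliverable: */
	if ((int_ctl & V_NMI_PENDING_MASK) &&
	    !(int_ctl & V_NMI_BLOCKING_MASK)) {
		int_ctl &= ~V_NMI_PENDING_MASK;
		int_ctl |= V_NMI_BLOCKING_MASK;	/* guest is handling the NMI */
		/* ... deliver the NMI through the guest IDT ... */
	}

	/* Processor, on IRET completion or on a VMEXIT during delivery: */
	int_ctl &= ~V_NMI_BLOCKING_MASK;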

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Santosh Shukla <santosh.shukla@amd.com>
---
v3:
- Renamed V_NMI bits per Sean's v2 comment for
  better readability.
https://lore.kernel.org/all/66f93354-22b1-a2aa-f64c-6e70b9b8063c@amd.com/

 arch/x86/include/asm/svm.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index cb1ee53ad3b1..9691081d9231 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -183,6 +183,12 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define V_GIF_SHIFT 9
 #define V_GIF_MASK (1 << V_GIF_SHIFT)
 
+#define V_NMI_PENDING_SHIFT 11
+#define V_NMI_PENDING_MASK (1 << V_NMI_PENDING_SHIFT)
+
+#define V_NMI_BLOCKING_SHIFT 12
+#define V_NMI_BLOCKING_MASK (1 << V_NMI_BLOCKING_SHIFT)
+
 #define V_INTR_PRIO_SHIFT 16
 #define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
 
@@ -197,6 +203,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define V_GIF_ENABLE_SHIFT 25
 #define V_GIF_ENABLE_MASK (1 << V_GIF_ENABLE_SHIFT)
 
+#define V_NMI_ENABLE_SHIFT 26
+#define V_NMI_ENABLE_MASK (1 << V_NMI_ENABLE_SHIFT)
+
 #define AVIC_ENABLE_SHIFT 31
 #define AVIC_ENABLE_MASK (1 << AVIC_ENABLE_SHIFT)
 
-- 
2.25.1



* [PATCHv4 10/11] KVM: x86: add support for delayed virtual NMI injection interface
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (8 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 09/11] KVM: SVM: Add VNMI bit definition Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-03-23  0:49   ` Sean Christopherson
  2023-02-27  8:40 ` [PATCHv4 11/11] KVM: nSVM: implement support for nested VNMI Santosh Shukla
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

Introduce two new vendor callbacks to support virtual NMI injection,
e.g. the vNMI feature of SVM:

- kvm_x86_is_vnmi_pending()
- kvm_x86_set_vnmi_pending()

Using those callbacks, KVM can take advantage of the hardware's
accelerated delayed NMI delivery (currently vNMI on SVM).

Once NMI is set to pending via this interface, it is assumed that
the hardware will deliver the NMI on its own to the guest once
all the x86 conditions for the NMI delivery are met.

Note that the 'kvm_x86_set_vnmi_pending()' callback is allowed
to fail, in which case a normal NMI injection will be attempted
when NMI can be delivered (possibly by using a NMI window).

With vNMI that can happen either if vNMI is already pending or
if a nested guest is running.

When the vNMI injection fails due to the 'vNMI is already pending'
condition, the new NMI will be dropped unless the new NMI can be
injected immediately, so no NMI window will be requested.

Use '.kvm_x86_set_hw_nmi_pending' method to inject the
pending NMIs for AMD's VNMI feature.

Note that vNMI doesn't need the NMI-window mechanism to pend a new
virtual NMI, and that KVM can now detect such an NMI via the
KVM_VCPUEVENT_VALID_NMI_PENDING flag and pend it by raising a
KVM_REQ_NMI request.

Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
Co-developed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
v3:
 - Fixed SOB
 - Merged V_NMI implementation with x86_ops delayed NMI
   API proposal for better readability.
 - Added early WARN_ON for VNMI case in svm_enable_nmi_window.
 - Indentation and style fixes per v2 comment.
 - Removed `svm->nmi_masked` check from svm_enable_nmi_window
   and replaced with svm_get_nmi_mask().
 - Note that I am keeping kvm_get_total_nmi_pending() logic
   like v2.. since `events->nmi.pending` is u8 not a boolean.
https://lore.kernel.org/all/Y9mwz%2FG6+G8NSX3+@google.com/

 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |  11 ++-
 arch/x86/kvm/svm/svm.c             | 113 +++++++++++++++++++++++------
 arch/x86/kvm/svm/svm.h             |  22 ++++++
 arch/x86/kvm/x86.c                 |  26 ++++++-
 5 files changed, 147 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 8dc345cc6318..092ef2398857 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -68,6 +68,8 @@ KVM_X86_OP(get_interrupt_shadow)
 KVM_X86_OP(patch_hypercall)
 KVM_X86_OP(inject_irq)
 KVM_X86_OP(inject_nmi)
+KVM_X86_OP_OPTIONAL_RET0(is_vnmi_pending)
+KVM_X86_OP_OPTIONAL_RET0(set_vnmi_pending)
 KVM_X86_OP(inject_exception)
 KVM_X86_OP(cancel_injection)
 KVM_X86_OP(interrupt_allowed)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 792a6037047a..f8a44c6c8633 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -878,7 +878,11 @@ struct kvm_vcpu_arch {
 	u64 tsc_scaling_ratio; /* current scaling ratio */
 
 	atomic_t nmi_queued;  /* unprocessed asynchronous NMIs */
-	unsigned nmi_pending; /* NMI queued after currently running handler */
+	/*
+	 * NMI queued after currently running handler
+	 * (not including a hardware pending NMI (e.g vNMI))
+	 */
+	unsigned int nmi_pending;
 	bool nmi_injected;    /* Trying to inject an NMI this entry */
 	bool smi_pending;    /* SMI queued after currently running handler */
 	u8 handling_intr_from_guest;
@@ -1640,6 +1644,10 @@ struct kvm_x86_ops {
 	int (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
 	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
+	/* returns true if an NMI is pending injection at the hardware level (e.g. vNMI) */
+	bool (*is_vnmi_pending)(struct kvm_vcpu *vcpu);
+	/* attempts to make an NMI pending via the hardware interface (e.g. vNMI) */
+	bool (*set_vnmi_pending)(struct kvm_vcpu *vcpu);
 	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
 	void (*enable_irq_window)(struct kvm_vcpu *vcpu);
 	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
@@ -2004,6 +2012,7 @@ int kvm_pic_set_irq(struct kvm_pic *pic, int irq, int irq_source_id, int level);
 void kvm_pic_clear_all(struct kvm_pic *pic, int irq_source_id);
 
 void kvm_inject_nmi(struct kvm_vcpu *vcpu);
+int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu);
 
 void kvm_update_dr7(struct kvm_vcpu *vcpu);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index da936723e8ca..84d9d2566629 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -230,6 +230,8 @@ module_param(dump_invalid_vmcb, bool, 0644);
 bool intercept_smi = true;
 module_param(intercept_smi, bool, 0444);
 
+bool vnmi = true;
+module_param(vnmi, bool, 0444);
 
 static bool svm_gp_erratum_intercept = true;
 
@@ -1311,6 +1313,9 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 	if (kvm_vcpu_apicv_active(vcpu))
 		avic_init_vmcb(svm, vmcb);
 
+	if (vnmi)
+		svm->vmcb->control.int_ctl |= V_NMI_ENABLE_MASK;
+
 	if (vgif) {
 		svm_clr_intercept(svm, INTERCEPT_STGI);
 		svm_clr_intercept(svm, INTERCEPT_CLGI);
@@ -3508,6 +3513,38 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 	++vcpu->stat.nmi_injections;
 }
 
+static bool svm_is_vnmi_pending(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (!is_vnmi_enabled(svm))
+		return false;
+
+	return !!(svm->vmcb->control.int_ctl & V_NMI_BLOCKING_MASK);
+}
+
+static bool svm_set_vnmi_pending(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (!is_vnmi_enabled(svm))
+		return false;
+
+	if (svm->vmcb->control.int_ctl & V_NMI_PENDING_MASK)
+		return false;
+
+	svm->vmcb->control.int_ctl |= V_NMI_PENDING_MASK;
+	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
+
+	/*
+	 * NMI isn't yet technically injected but
+	 * this rough estimation should be good enough
+	 */
+	++vcpu->stat.nmi_injections;
+
+	return true;
+}
+
 static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3603,6 +3640,35 @@ static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 		svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
 }
 
+static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (is_vnmi_enabled(svm))
+		return svm->vmcb->control.int_ctl & V_NMI_BLOCKING_MASK;
+	else
+		return svm->nmi_masked;
+}
+
+static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (is_vnmi_enabled(svm)) {
+		if (masked)
+			svm->vmcb->control.int_ctl |= V_NMI_BLOCKING_MASK;
+		else
+			svm->vmcb->control.int_ctl &= ~V_NMI_BLOCKING_MASK;
+
+	} else {
+		svm->nmi_masked = masked;
+		if (masked)
+			svm_set_iret_intercept(svm);
+		else
+			svm_clr_iret_intercept(svm);
+	}
+}
+
 bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3614,8 +3680,10 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 	if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
 		return false;
 
-	return (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
-	       svm->nmi_masked;
+	if (svm_get_nmi_mask(vcpu))
+		return true;
+
+	return vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK;
 }
 
 static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
@@ -3633,24 +3701,6 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return 1;
 }
 
-static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
-{
-	return to_svm(vcpu)->nmi_masked;
-}
-
-static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	if (masked) {
-		svm->nmi_masked = true;
-		svm_set_iret_intercept(svm);
-	} else {
-		svm->nmi_masked = false;
-		svm_clr_iret_intercept(svm);
-	}
-}
-
 bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3731,7 +3781,14 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (svm->nmi_masked && !svm->awaiting_iret_completion)
+	/*
+	 * An NMI window is not needed when vNMI is enabled; if this
+	 * point is reached anyway, WARN and fall through to the
+	 * single-step logic below.
+	 */
+	WARN_ON_ONCE(is_vnmi_enabled(svm));
+
+	if (svm_get_nmi_mask(vcpu) && !svm->awaiting_iret_completion)
 		return; /* IRET will cause a vm exit */
 
 	if (!gif_set(svm)) {
@@ -3745,8 +3802,8 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
 	 * problem (IRET or exception injection or interrupt shadow)
 	 */
 	svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
-	svm->nmi_singlestep = true;
 	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
+	svm->nmi_singlestep = true;
 }
 
 static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
@@ -4780,6 +4837,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.patch_hypercall = svm_patch_hypercall,
 	.inject_irq = svm_inject_irq,
 	.inject_nmi = svm_inject_nmi,
+	.is_vnmi_pending = svm_is_vnmi_pending,
+	.set_vnmi_pending = svm_set_vnmi_pending,
 	.inject_exception = svm_inject_exception,
 	.cancel_injection = svm_cancel_injection,
 	.interrupt_allowed = svm_interrupt_allowed,
@@ -5070,6 +5129,16 @@ static __init int svm_hardware_setup(void)
 			pr_info("Virtual GIF supported\n");
 	}
 
+	vnmi = vgif && vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI);
+	if (vnmi)
+		pr_info("Virtual NMI enabled\n");
+
+	if (!vnmi) {
+		svm_x86_ops.is_vnmi_pending = NULL;
+		svm_x86_ops.set_vnmi_pending = NULL;
+	}
+
+
 	if (lbrv) {
 		if (!boot_cpu_has(X86_FEATURE_LBRV))
 			lbrv = false;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 839809972da1..fb48c347bbe0 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -36,6 +36,7 @@ extern bool npt_enabled;
 extern int vgif;
 extern bool intercept_smi;
 extern bool x2avic_enabled;
+extern bool vnmi;
 
 /*
  * Clean bits in VMCB.
@@ -548,6 +549,27 @@ static inline bool is_x2apic_msrpm_offset(u32 offset)
 	       (msr < (APIC_BASE_MSR + 0x100));
 }
 
+static inline struct vmcb *get_vnmi_vmcb_l1(struct vcpu_svm *svm)
+{
+	if (!vnmi)
+		return NULL;
+
+	if (is_guest_mode(&svm->vcpu))
+		return NULL;
+	else
+		return svm->vmcb01.ptr;
+}
+
+static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb = get_vnmi_vmcb_l1(svm);
+
+	if (vmcb)
+		return !!(vmcb->control.int_ctl & V_NMI_ENABLE_MASK);
+	else
+		return false;
+}
+
 /* svm.c */
 #define MSR_INVALID				0xffffffffU
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b22074f467e0..b5354249fe00 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5113,7 +5113,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
 
 	events->nmi.injected = vcpu->arch.nmi_injected;
-	events->nmi.pending = vcpu->arch.nmi_pending;
+	events->nmi.pending = kvm_get_total_nmi_pending(vcpu);
 	events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);
 
 	/* events->sipi_vector is never valid when reporting to user space */
@@ -5201,9 +5201,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 
 	vcpu->arch.nmi_injected = events->nmi.injected;
 	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
-		vcpu->arch.nmi_pending = events->nmi.pending;
-		if (vcpu->arch.nmi_pending)
-			kvm_make_request(KVM_REQ_NMI, vcpu);
+		vcpu->arch.nmi_pending = 0;
+		atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
+		kvm_make_request(KVM_REQ_NMI, vcpu);
 	}
 	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
 
@@ -10156,13 +10156,31 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	else
 		limit = 2;
 
+	/*
+	 * Adjust the limit to account for pending virtual NMIs, which aren't
+	 * tracked in vcpu->arch.nmi_pending.
+	 */
+	if (static_call(kvm_x86_is_vnmi_pending)(vcpu))
+		limit--;
+
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
 	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
 
+	if (vcpu->arch.nmi_pending &&
+	    (static_call(kvm_x86_set_vnmi_pending)(vcpu)))
+		vcpu->arch.nmi_pending--;
+
 	if (vcpu->arch.nmi_pending)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 
+/* Return total number of NMIs pending injection to the VM */
+int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.nmi_pending + static_call(kvm_x86_is_vnmi_pending)(vcpu);
+}
+
+
 void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
 				       unsigned long *vcpu_bitmap)
 {
-- 
2.25.1



* [PATCHv4 11/11] KVM: nSVM: implement support for nested VNMI
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (9 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 10/11] KVM: x86: add support for delayed virtual NMI injection interface Santosh Shukla
@ 2023-02-27  8:40 ` Santosh Shukla
  2023-03-23  0:50   ` Sean Christopherson
  2023-03-10  9:19 ` [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Santosh Shukla @ 2023-02-27  8:40 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

Allow L1 to use vNMI to accelerate its injection of NMIs into L2 by
passing the vNMI int_ctl bits through between vmcb12 and vmcb02.

If both L1 and L2 use vNMI, copy the vNMI bits from vmcb12 to vmcb02
on nested entry and back again on nested exit. If L1 uses vNMI but L2
doesn't, copy the vNMI bits from vmcb01 to vmcb02 on nested entry and
back again on nested exit.

Tested with KVM-unit-tests and a nested-guest scenario.
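
As a usage sketch from L1's point of view (field and bit names as
defined earlier in this series; the VMRUN plumbing and error handling
are omitted, and note that the new consistency check also requires
INTERCEPT_NMI to be set in vmcb12), an L1 hypervisor can now pend an
NMI for L2 without using event injection:

	/* In L1: queue a virtual NMI for L2 via vmcb12. */
	vmcb12->control.int_ctl |= V_NMI_ENABLE_MASK | V_NMI_PENDING_MASK;

	/* ... VMRUN(vmcb12) ... */

	/* After #VMEXIT, L1 sees the current vNMI state synced back: */
	if (vmcb12->control.int_ctl & V_NMI_BLOCKING_MASK) {
		/* L2 is still inside its NMI handler. */
	}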

Co-developed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Santosh Shukla <santosh.shukla@amd.com>
---
v3:
- Fixed indentation and style issues.
- Fixed SOB.
- Removed the `svm->nmi_masked` use for the nested SVM case.
- Reworded the commit description.
https://lore.kernel.org/all/Y9m15P8xQ2dxvIzd@google.com/

 arch/x86/kvm/svm/nested.c | 33 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c    |  5 +++++
 arch/x86/kvm/svm/svm.h    |  6 ++++++
 3 files changed, 44 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 74e9e9e76d77..b018fe2fdf88 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -281,6 +281,11 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
 	if (CC(!nested_svm_check_tlb_ctl(vcpu, control->tlb_ctl)))
 		return false;
 
+	if (CC((control->int_ctl & V_NMI_ENABLE_MASK) &&
+	       !vmcb12_is_intercept(control, INTERCEPT_NMI))) {
+		return false;
+	}
+
 	return true;
 }
 
@@ -436,6 +441,9 @@ void nested_sync_control_from_vmcb02(struct vcpu_svm *svm)
 	if (nested_vgif_enabled(svm))
 		mask |= V_GIF_MASK;
 
+	if (nested_vnmi_enabled(svm))
+		mask |= V_NMI_BLOCKING_MASK | V_NMI_PENDING_MASK;
+
 	svm->nested.ctl.int_ctl        &= ~mask;
 	svm->nested.ctl.int_ctl        |= svm->vmcb->control.int_ctl & mask;
 }
@@ -655,6 +663,17 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	else
 		int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);
 
+	if (vnmi) {
+		if (vmcb01->control.int_ctl & V_NMI_PENDING_MASK) {
+			svm->vcpu.arch.nmi_pending++;
+			kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+		}
+		if (nested_vnmi_enabled(svm))
+			int_ctl_vmcb12_bits |= (V_NMI_PENDING_MASK |
+						V_NMI_ENABLE_MASK |
+						V_NMI_BLOCKING_MASK);
+	}
+
 	/* Copied from vmcb01.  msrpm_base can be overwritten later.  */
 	vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
 	vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
@@ -1058,6 +1077,20 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 		svm_update_lbrv(vcpu);
 	}
 
+	if (vnmi) {
+		if (vmcb02->control.int_ctl & V_NMI_BLOCKING_MASK)
+			vmcb01->control.int_ctl |= V_NMI_BLOCKING_MASK;
+		else
+			vmcb01->control.int_ctl &= ~V_NMI_BLOCKING_MASK;
+
+		if (vcpu->arch.nmi_pending) {
+			vcpu->arch.nmi_pending--;
+			vmcb01->control.int_ctl |= V_NMI_PENDING_MASK;
+		} else
+			vmcb01->control.int_ctl &= ~V_NMI_PENDING_MASK;
+
+	}
+
 	/*
 	 * On vmexit the  GIF is set to false and
 	 * no event can be injected in L1.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 84d9d2566629..08b7856e2da2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4226,6 +4226,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF);
 
+	svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_AMD_VNMI);
+
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
 	/* For sev guests, the memory encryption bit is not reserved in CR3.  */
@@ -4981,6 +4983,9 @@ static __init void svm_set_cpu_caps(void)
 		if (vgif)
 			kvm_cpu_cap_set(X86_FEATURE_VGIF);
 
+		if (vnmi)
+			kvm_cpu_cap_set(X86_FEATURE_AMD_VNMI);
+
 		/* Nested VM can receive #VMEXIT instead of triggering #GP */
 		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index fb48c347bbe0..e229eadbf1ce 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -266,6 +266,7 @@ struct vcpu_svm {
 	bool pause_filter_enabled         : 1;
 	bool pause_threshold_enabled      : 1;
 	bool vgif_enabled                 : 1;
+	bool vnmi_enabled                 : 1;
 
 	u32 ldr_reg;
 	u32 dfr_reg;
@@ -540,6 +541,11 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 	return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
 }
 
+static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
+{
+	return svm->vnmi_enabled && (svm->nested.ctl.int_ctl & V_NMI_ENABLE_MASK);
+}
+
 static inline bool is_x2apic_msrpm_offset(u32 offset)
 {
 	/* 4 msrs per u8, and 4 u8 in u32 */
-- 
2.25.1



* Re: [PATCHv4 00/11] SVM: virtual NMI
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (10 preceding siblings ...)
  2023-02-27  8:40 ` [PATCHv4 11/11] KVM: nSVM: implement support for nested VNMI Santosh Shukla
@ 2023-03-10  9:19 ` Santosh Shukla
  2023-03-10 17:02   ` Sean Christopherson
  2023-03-23  0:57 ` Sean Christopherson
  2023-03-23 22:53 ` Sean Christopherson
  13 siblings, 1 reply; 22+ messages in thread
From: Santosh Shukla @ 2023-03-10  9:19 UTC (permalink / raw)
  To: kvm, seanjc
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets



On 2/27/2023 2:10 PM, Santosh Shukla wrote:
> 
> v2:
> https://lore.kernel.org/all/0f56e139-4c7f-5220-a4a2-99f87f45fd83@amd.com/
> 
> v3:
> https://lore.kernel.org/all/20230227035400.1498-1-santosh.shukla@amd.com/
>  - 09/11: Combined the x86_ops delayed-NMI changes with the vNMI
>    changes into one patch, for better readability (Sean's suggestion).
>  - The series includes the suggestions and fixes proposed against v2.
>    Refer to each patch for its change history (v2-->v3).
> 
> v4:
>  - Resent with patch 01/11, which was missing from the v3 posting.
> 
> Series based on [1] and tested on AMD EPYC-Genoa.
> 
> 
> APM: (Ch. 15.21.10 - NMI Virtualization)
> https://www.amd.com/en/support/tech-docs/amd64-architecture-programmers-manual-volumes-1-5
> 
> For past history and earlier work, refer to v5:
> https://lkml.org/lkml/2022/10/27/261
> 
> Thanks,
> Santosh
> [1] https://github.com/kvm-x86/linux branch kvm-x86/next (62ef199250cd46f)
> 
> 

Gentle Ping?

Thanks,
Santosh


> 
> Maxim Levitsky (2):
>   KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
>   KVM: SVM: add wrappers to enable/disable IRET interception
> 
> Santosh Shukla (6):
>   KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is
>     intercepting VINTR
>   KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0
>   x86/cpu: Add CPUID feature bit for VNMI
>   KVM: SVM: Add VNMI bit definition
>   KVM: x86: add support for delayed virtual NMI injection interface
>   KVM: nSVM: implement support for nested VNMI
> 
> Sean Christopherson (3):
>   KVM: x86: Raise an event request when processing NMIs if an NMI is
>     pending
>   KVM: x86: Tweak the code and comment related to handling concurrent
>     NMIs
>   KVM: x86: Save/restore all NMIs when multiple NMIs are pending
> 
>  arch/x86/include/asm/cpufeatures.h |   1 +
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |  11 ++-
>  arch/x86/include/asm/svm.h         |   9 ++
>  arch/x86/kvm/svm/nested.c          |  94 +++++++++++++++---
>  arch/x86/kvm/svm/svm.c             | 152 +++++++++++++++++++++++------
>  arch/x86/kvm/svm/svm.h             |  28 ++++++
>  arch/x86/kvm/x86.c                 |  46 +++++++--
>  8 files changed, 289 insertions(+), 54 deletions(-)
> 



* Re: [PATCHv4 00/11] SVM: virtual NMI
  2023-03-10  9:19 ` [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
@ 2023-03-10 17:02   ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2023-03-10 17:02 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Fri, Mar 10, 2023, Santosh Shukla wrote:
> Gentle Ping?

I'm slowly working my way into review mode for 6.4.  This is very much on my todo list.


* Re: [PATCHv4 08/11] x86/cpu: Add CPUID feature bit for VNMI
  2023-02-27  8:40 ` [PATCHv4 08/11] x86/cpu: Add CPUID feature bit for VNMI Santosh Shukla
@ 2023-03-22 19:07   ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2023-03-22 19:07 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Mon, Feb 27, 2023, Santosh Shukla wrote:
> The VNMI feature allows the hypervisor to inject an NMI into the guest
> without using the event injection mechanism. The benefit of VNMI over
> event injection is that it does not require tracking the guest's NMI
> state or intercepting IRET to detect NMI completion. VNMI achieves
> this by exposing 3 capability bits in the VMCB's int_ctl field that
> help with virtualizing NMI injection and NMI masking.
> 
> The presence of this feature is indicated via the CPUID function
> 0x8000000A_EDX[25].
> 
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Santosh Shukla <santosh.shukla@amd.com>
> ---
>  arch/x86/include/asm/cpufeatures.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index cdb7e1492311..b3ae49f36008 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -365,6 +365,7 @@
>  #define X86_FEATURE_VGIF		(15*32+16) /* Virtual GIF */
>  #define X86_FEATURE_X2AVIC		(15*32+18) /* Virtual x2apic */
>  #define X86_FEATURE_V_SPEC_CTRL		(15*32+20) /* Virtual SPEC_CTRL */
> +#define X86_FEATURE_AMD_VNMI		(15*32+25) /* Virtual NMI */

Rather than carry VNMI and AMD_VNMI, what if we redefine VNMI to use AMD's real
CPUID bit?  The synthetic flag exists purely so that the conversion to VMX feature
flags didn't break /proc/cpuinfo.  X86_FEATURE_VNMI isn't consumed by the kernel,
and if that changes, having a common flag might actually be a good thing, e.g.
would allow common KVM code to query vNMI support without needing VMX vs. SVM
hooks.

I.e. drop this in

From: Sean Christopherson <seanjc@google.com>
Date: Wed, 22 Mar 2023 11:33:08 -0700
Subject: [PATCH] x86/cpufeatures: Redefine synthetic virtual NMI bit as AMD's
 "real" vNMI

The existing X86_FEATURE_VNMI is a synthetic feature flag that exists
purely to maintain /proc/cpuinfo's ABI, the "real" Intel vNMI feature flag
is tracked as VMX_FEATURE_VIRTUAL_NMIS, as the feature is enumerated
through VMX MSRs, not CPUID.

AMD is also gaining virtual NMI support, but in true VMX vs. SVM form it
enumerates support through CPUID, i.e. wants to add a real feature flag for
vNMI.

Redefine the synthetic X86_FEATURE_VNMI to AMD's real CPUID bit to avoid
having both X86_FEATURE_VNMI and e.g. X86_FEATURE_AMD_VNMI.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/cpufeatures.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 73c9672c123b..ced9e1832589 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -226,10 +226,9 @@
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
-#define X86_FEATURE_VNMI		( 8*32+ 1) /* Intel Virtual NMI */
-#define X86_FEATURE_FLEXPRIORITY	( 8*32+ 2) /* Intel FlexPriority */
-#define X86_FEATURE_EPT			( 8*32+ 3) /* Intel Extended Page Table */
-#define X86_FEATURE_VPID		( 8*32+ 4) /* Intel Virtual Processor ID */
+#define X86_FEATURE_FLEXPRIORITY	( 8*32+ 1) /* Intel FlexPriority */
+#define X86_FEATURE_EPT			( 8*32+ 2) /* Intel Extended Page Table */
+#define X86_FEATURE_VPID		( 8*32+ 3) /* Intel Virtual Processor ID */
 
 #define X86_FEATURE_VMMCALL		( 8*32+15) /* Prefer VMMCALL to VMCALL */
 #define X86_FEATURE_XENPV		( 8*32+16) /* "" Xen paravirtual guest */
@@ -369,6 +368,7 @@
 #define X86_FEATURE_VGIF		(15*32+16) /* Virtual GIF */
 #define X86_FEATURE_X2AVIC		(15*32+18) /* Virtual x2apic */
 #define X86_FEATURE_V_SPEC_CTRL		(15*32+20) /* Virtual SPEC_CTRL */
+#define X86_FEATURE_VNMI		(15*32+25) /* Virtual NMI */
 #define X86_FEATURE_SVME_ADDR_CHK	(15*32+28) /* "" SVME addr check */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */

base-commit: a3af52e7c9d801f5d7c1fcf5679aaf48c33b6e88
-- 


* Re: [PATCHv4 10/11] KVM: x86: add support for delayed virtual NMI injection interface
  2023-02-27  8:40 ` [PATCHv4 10/11] KVM: x86: add support for delayed virtual NMI injection interface Santosh Shukla
@ 2023-03-23  0:49   ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2023-03-23  0:49 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

Please take the time to update shortlogs, changelogs, and comments when spinning
a new version.  This patch does waaaaay more than just "add support for delayed
virtual NMI injection interface", and the changelog isn't any better.   As you
can probably deduce from the nearly one month delay in me reviewing this version,
churning out a new version as quickly as possible is slower overall than taking
the time to make each version as solid as possible.

I'll fix things up this time.  I don't mind that much in this case because the
vNMI stuff is rather subtle and reworking changelogs+comments was a good way
to review the code.  But in the future please take the time to fine tune the
entire patch, not just the code.

On Mon, Feb 27, 2023, Santosh Shukla wrote:
> Introduce two new vendor callbacks to support virtual NMI injection,
> e.g. the vNMI feature of SVM:
> 
> - kvm_x86_is_vnmi_pending()
> - kvm_x86_set_vnmi_pending()
> 
> Using those callbacks, KVM can take advantage of the hardware's
> accelerated delayed NMI delivery (currently vNMI on SVM).
> 
> Once NMI is set to pending via this interface, it is assumed that

State what the hardware does, not what it is assumed to do.  Hardware behavior
must be an immutable truth as far as KVM is concerned.

> the hardware will deliver the NMI on its own to the guest once
> all the x86 conditions for the NMI delivery are met.
> 
> Note that the 'kvm_x86_set_vnmi_pending()' callback is allowed
> to fail, in which case a normal NMI injection will be attempted
> when NMI can be delivered (possibly by using a NMI window).

Leading with "possibly by using an NMI window" and then contradicting that a few
sentences later is really confusing.

> With vNMI that can happen either if vNMI is already pending or
> if a nested guest is running.
> 
> When the vNMI injection fails due to the 'vNMI is already pending'
> condition, the new NMI will be dropped unless the new NMI can be
> injected immediately, so no NMI window will be requested.
> 
> Use '.kvm_x86_set_hw_nmi_pending' method to inject the

Stale reference.  Just delete this sentence; the role of the changelog is not to
give a play-by-play of the code.

> pending NMIs for AMD's VNMI feature.
> 
> Note that vNMI doesn't need nmi_window_existing feature to
> pend the new virtual NMI and that KVM will now be able to
> detect with flag (KVM_VCPUEVENT_VALID_NMI_PENDING) and pend
> the new NMI by raising KVM_REQ_NMI event.
> 
> Signed-off-by: Santosh Shukla <Santosh.Shukla@amd.com>
> Co-developed-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
> v3:
>  - Fixed SOB
>  - Merged V_NMI implementation with x86_ops delayed NMI
>    API proposal for better readability.
>  - Added early WARN_ON for VNMI case in svm_enable_nmi_window.
>  - Indentation and style fixes per v2 comment.
>  - Removed `svm->nmi_masked` check from svm_enable_nmi_window
>    and replaced with svm_get_nmi_mask().
>  - Note that I am keeping kvm_get_total_nmi_pending() logic
>    like v2.. since `events->nmi.pending` is u8 not a boolean.
> https://lore.kernel.org/all/Y9mwz%2FG6+G8NSX3+@google.com/
> 
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |  11 ++-
>  arch/x86/kvm/svm/svm.c             | 113 +++++++++++++++++++++++------
>  arch/x86/kvm/svm/svm.h             |  22 ++++++
>  arch/x86/kvm/x86.c                 |  26 ++++++-
>  5 files changed, 147 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 8dc345cc6318..092ef2398857 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -68,6 +68,8 @@ KVM_X86_OP(get_interrupt_shadow)
>  KVM_X86_OP(patch_hypercall)
>  KVM_X86_OP(inject_irq)
>  KVM_X86_OP(inject_nmi)
> +KVM_X86_OP_OPTIONAL_RET0(is_vnmi_pending)
> +KVM_X86_OP_OPTIONAL_RET0(set_vnmi_pending)
>  KVM_X86_OP(inject_exception)
>  KVM_X86_OP(cancel_injection)
>  KVM_X86_OP(interrupt_allowed)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 792a6037047a..f8a44c6c8633 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -878,7 +878,11 @@ struct kvm_vcpu_arch {
>  	u64 tsc_scaling_ratio; /* current scaling ratio */
>  
>  	atomic_t nmi_queued;  /* unprocessed asynchronous NMIs */
> -	unsigned nmi_pending; /* NMI queued after currently running handler */
> +	/*
> +	 * NMI queued after currently running handler
> +	 * (not including a hardware pending NMI (e.g vNMI))
> +	 */
> +	unsigned int nmi_pending;
>  	bool nmi_injected;    /* Trying to inject an NMI this entry */
>  	bool smi_pending;    /* SMI queued after currently running handler */
>  	u8 handling_intr_from_guest;
> @@ -1640,6 +1644,10 @@ struct kvm_x86_ops {
>  	int (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
>  	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
>  	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
> +	/* returns true, if a NMI is pending injection on hardware level (e.g vNMI) */
> +	bool (*is_vnmi_pending)(struct kvm_vcpu *vcpu);
> +	/* attempts make a NMI pending via hardware interface (e.g vNMI) */

Expand this comment to justify/explain the use of a boolean return (static_call
RET0).
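
For illustration, an expanded comment might spell out the contract along these
lines (a sketch, not the final in-tree wording):

    /*
     * Attempt to pend a virtual NMI in hardware.  Returns %false if the NMI
     * could not be offloaded, e.g. because a vNMI is already pending, in
     * which case the caller falls back to the software nmi_pending path.
     * Declared KVM_X86_OP_OPTIONAL_RET0 so that vendors without hardware
     * support unconditionally take the software fallback.
     */
    bool (*set_vnmi_pending)(struct kvm_vcpu *vcpu);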

> +	bool (*set_vnmi_pending)(struct kvm_vcpu *vcpu);
>  	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
>  	void (*enable_irq_window)(struct kvm_vcpu *vcpu);
>  	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);

...

> @@ -3745,8 +3802,8 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
>  	 * problem (IRET or exception injection or interrupt shadow)
>  	 */
>  	svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
> -	svm->nmi_singlestep = true;
>  	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
> +	svm->nmi_singlestep = true;

Spurious change.

>  }
>  
>  static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
> @@ -4780,6 +4837,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>  	.patch_hypercall = svm_patch_hypercall,
>  	.inject_irq = svm_inject_irq,
>  	.inject_nmi = svm_inject_nmi,
> +	.is_vnmi_pending = svm_is_vnmi_pending,
> +	.set_vnmi_pending = svm_set_vnmi_pending,
>  	.inject_exception = svm_inject_exception,
>  	.cancel_injection = svm_cancel_injection,
>  	.interrupt_allowed = svm_interrupt_allowed,
> @@ -5070,6 +5129,16 @@ static __init int svm_hardware_setup(void)
>  			pr_info("Virtual GIF supported\n");
>  	}
>  
> +	vnmi = vgif && vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI);
> +	if (vnmi)
> +		pr_info("Virtual NMI enabled\n");
> +
> +	if (!vnmi) {
> +		svm_x86_ops.is_vnmi_pending = NULL;
> +		svm_x86_ops.set_vnmi_pending = NULL;
> +	}
> +
> +
>  	if (lbrv) {
>  		if (!boot_cpu_has(X86_FEATURE_LBRV))
>  			lbrv = false;
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 839809972da1..fb48c347bbe0 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -36,6 +36,7 @@ extern bool npt_enabled;
>  extern int vgif;
>  extern bool intercept_smi;
>  extern bool x2avic_enabled;
> +extern bool vnmi;
>  
>  /*
>   * Clean bits in VMCB.
> @@ -548,6 +549,27 @@ static inline bool is_x2apic_msrpm_offset(u32 offset)
>  	       (msr < (APIC_BASE_MSR + 0x100));
>  }
>  
> +static inline struct vmcb *get_vnmi_vmcb_l1(struct vcpu_svm *svm)
> +{
> +	if (!vnmi)
> +		return NULL;
> +
> +	if (is_guest_mode(&svm->vcpu))
> +		return NULL;
> +	else
> +		return svm->vmcb01.ptr;
> +}
> +
> +static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
> +{
> +	struct vmcb *vmcb = get_vnmi_vmcb_l1(svm);
> +
> +	if (vmcb)
> +		return !!(vmcb->control.int_ctl & V_NMI_ENABLE_MASK);
> +	else
> +		return false;
> +}
> +
>  /* svm.c */
>  #define MSR_INVALID				0xffffffffU
>  
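
The svm.c implementations of the new callbacks themselves are elided above;
judging from the helpers in this hunk and the V_NMI_PENDING_MASK define from
patch 09, the query side plausibly reduces to a sketch like:

    static bool svm_is_vnmi_pending(struct kvm_vcpu *vcpu)
    {
    	struct vcpu_svm *svm = to_svm(vcpu);

    	/* No hardware-pending NMI to report unless vNMI is in use for L1. */
    	if (!is_vnmi_enabled(svm))
    		return false;

    	return !!(get_vnmi_vmcb_l1(svm)->control.int_ctl & V_NMI_PENDING_MASK);
    }
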
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b22074f467e0..b5354249fe00 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5113,7 +5113,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
>  	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
>  
>  	events->nmi.injected = vcpu->arch.nmi_injected;
> -	events->nmi.pending = vcpu->arch.nmi_pending;
> +	events->nmi.pending = kvm_get_total_nmi_pending(vcpu);
>  	events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);
>  
>  	/* events->sipi_vector is never valid when reporting to user space */
> @@ -5201,9 +5201,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  
>  	vcpu->arch.nmi_injected = events->nmi.injected;
>  	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
> -		vcpu->arch.nmi_pending = events->nmi.pending;
> -		if (vcpu->arch.nmi_pending)
> -			kvm_make_request(KVM_REQ_NMI, vcpu);
> +		vcpu->arch.nmi_pending = 0;
> +		atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
> +		kvm_make_request(KVM_REQ_NMI, vcpu);

I'm going to split this out to a separate patch.  I want to isolate this change
from vNMI support, and unlike the addition of the kvm_x86_ops hooks, it makes
sense as a standalone thing (at least, IMO it does :-) ).
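
For context on why it's safe to bounce the userspace-provided count through
nmi_queued: KVM_REQ_NMI funnels into process_nmi(), which drains the queue
into nmi_pending while enforcing the architectural limit.  Recalled from
x86.c (pre-series shape, lightly abridged):

    static void process_nmi(struct kvm_vcpu *vcpu)
    {
    	unsigned int limit = 2;

    	/*
    	 * x86 allows one NMI to be in service and at most one more to be
    	 * pending behind it; if an NMI is already in flight, queue just one.
    	 */
    	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
    		limit = 1;

    	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
    	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
    	kvm_make_request(KVM_REQ_EVENT, vcpu);
    }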

>  	}
>  	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
>  

...

> +/* Return total number of NMIs pending injection to the VM */
> +int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu)

I much prefer kvm_get_nr_pending_nmis() to make it obvious that this returns a
number and that that number can be greater than 1.

> +{
> +	return vcpu->arch.nmi_pending + static_call(kvm_x86_is_vnmi_pending)(vcpu);
> +}
> +
> +
>  void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
>  				       unsigned long *vcpu_bitmap)
>  {
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCHv4 11/11] KVM: nSVM: implement support for nested VNMI
  2023-02-27  8:40 ` [PATCHv4 11/11] KVM: nSVM: implement support for nested VNMI Santosh Shukla
@ 2023-03-23  0:50   ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2023-03-23  0:50 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Mon, Feb 27, 2023, Santosh Shukla wrote:
> Allows L1 to use vNMI to accelerate its injection of NMI
> to L2 by passing through vNMI int_ctl bits from vmcb12
> to/from vmcb02.
> 
> In case of L1 and L2 both using VNMI- Copy VNMI bits from vmcb12 to
> vmcb02 during entry and vice-versa during exit.
> And in case of L1 uses VNMI and L2 doesn't- Copy VNMI bits from vmcb01 to
> vmcb02 during entry and vice-versa during exit.

This changelog is again stale, as it does not match the code.  Or maybe it never
matched the code.  The code looks correct though.

    KVM: nSVM: Implement support for nested VNMI
    
    Allow L1 to use vNMI to accelerate its injection of NMI to L2 by
    propagating vNMI int_ctl bits from/to vmcb12 to/from vmcb02.
    
    To handle both the case where vNMI is enabled for L1 and L2, and where
    vNMI is enabled for L1 but _not_ L2, move pending L1 vNMIs to nmi_pending
    on nested VM-Entry and raise KVM_REQ_EVENT, i.e. rely on existing code to
    route the NMI to the correct domain.
    
    On nested VM-Exit, reverse the process and set/clear V_NMI_PENDING for L1
    based on whether nmi_pending is zero or non-zero.  There is no need to
    consider vmcb02 in this case, as V_NMI_PENDING can be set in vmcb02 if
    vNMI is disabled for L2, and if vNMI is enabled for L2, then L1 and L2
    have different NMI contexts.
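
Translated into code, the two halves described above amount to roughly the
following (placement inside the nested entry/exit paths and the top-level
`vnmi` guard are assumptions; the int_ctl bit names come from the series):

    /* Nested VM-Entry: convert a pending L1 vNMI into a software-pended NMI
     * so existing event routing targets the correct domain (L1 vs. L2). */
    if (vnmi && (svm->vmcb01.ptr->control.int_ctl & V_NMI_PENDING_MASK)) {
    	svm->vmcb01.ptr->control.int_ctl &= ~V_NMI_PENDING_MASK;
    	vcpu->arch.nmi_pending++;
    	kvm_make_request(KVM_REQ_EVENT, vcpu);
    }

    /* Nested VM-Exit: reverse the process based on whether an NMI is still
     * pending in software. */
    if (vnmi) {
    	if (vcpu->arch.nmi_pending) {
    		vcpu->arch.nmi_pending--;
    		svm->vmcb01.ptr->control.int_ctl |= V_NMI_PENDING_MASK;
    	} else {
    		svm->vmcb01.ptr->control.int_ctl &= ~V_NMI_PENDING_MASK;
    	}
    }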

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCHv4 09/11] KVM: SVM: Add VNMI bit definition
  2023-02-27  8:40 ` [PATCHv4 09/11] KVM: SVM: Add VNMI bit definition Santosh Shukla
@ 2023-03-23  0:54   ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2023-03-23  0:54 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Mon, Feb 27, 2023, Santosh Shukla wrote:
> VNMI exposes 3 capability bits (V_NMI, V_NMI_MASK, and V_NMI_ENABLE) to
> virtualize NMI and NMI_MASK, Those capability bits are part of
> VMCB::intr_ctrl -
> V_NMI_PENDING_MASK(11) - Indicates whether a virtual NMI is pending in the
> guest.
> V_NMI_BLOCKING_MASK(12) - Indicates whether virtual NMI is masked in the
> guest.
> V_NMI_ENABLE_MASK(26) - Enables the NMI virtualization feature for the
> guest.

This is way harder to read than it needs to be.  The intent of the various rules
for line length and whatnot is to make code/changelogs easier to read.  That intent
is lost if code/changelogs are written without actually considering the rules.
In other words, don't write changelogs, comments, etc. without thinking about how
the result will look when the line length rules apply.

    Add defines for three new bits in VMCB::int_ctl that are part of SVM's
    Virtual NMI (vNMI) support:
    
      V_NMI_PENDING_MASK(11)  - Virtual NMI is pending
      V_NMI_BLOCKING_MASK(12) - Virtual NMI is masked
      V_NMI_ENABLE_MASK(26)   - Enable NMI virtualization
    
    To "inject" an NMI, the hypervisor (KVM) sets V_NMI_PENDING.  When the
    CPU services the pending vNMI, hardware clears V_NMI_PENDING and sets
    V_NMI_BLOCKING, e.g. to indicate that the vCPU is handling an NMI.
    Hardware clears V_NMI_BLOCKING upon successful execution of IRET, or if a
    VM-Exit occurs while delivering the virtual NMI.
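
In header form (arch/x86/include/asm/svm.h), these would follow the existing
int_ctl shift/mask pattern; a sketch consistent with the bit positions above
(the shift-macro style is an assumption, though V_NMI_ENABLE_MASK is already
used elsewhere in the series):

    #define V_NMI_PENDING_SHIFT	11
    #define V_NMI_PENDING_MASK	(1 << V_NMI_PENDING_SHIFT)

    #define V_NMI_BLOCKING_SHIFT	12
    #define V_NMI_BLOCKING_MASK	(1 << V_NMI_BLOCKING_SHIFT)

    #define V_NMI_ENABLE_SHIFT	26
    #define V_NMI_ENABLE_MASK	(1 << V_NMI_ENABLE_SHIFT)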

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCHv4 00/11] SVM: virtual NMI
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (11 preceding siblings ...)
  2023-03-10  9:19 ` [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
@ 2023-03-23  0:57 ` Sean Christopherson
  2023-03-23  1:14   ` Sean Christopherson
  2023-03-23 22:53 ` Sean Christopherson
  13 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2023-03-23  0:57 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Mon, Feb 27, 2023, Santosh Shukla wrote:
> Maxim Levitsky (2):
>   KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
>   KVM: SVM: add wrappers to enable/disable IRET interception
> 
> Santosh Shukla (6):
>   KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is
>     intercepting VINTR
>   KVM: nSVM: Disable intercept of VINTR if saved RFLAG.IF is 0
>   x86/cpu: Add CPUID feature bit for VNMI
>   KVM: SVM: Add VNMI bit definition
>   KVM: x86: add support for delayed virtual NMI injection interface
>   KVM: nSVM: implement support for nested VNMI
> 
> Sean Christopherson (3):
>   KVM: x86: Raise an event request when processing NMIs if an NMI is
>     pending
>   KVM: x86: Tweak the code and comment related to handling concurrent
>     NMIs
>   KVM: x86: Save/restore all NMIs when multiple NMIs are pending
> 
>  arch/x86/include/asm/cpufeatures.h |   1 +
>  arch/x86/include/asm/kvm-x86-ops.h |   2 +
>  arch/x86/include/asm/kvm_host.h    |  11 ++-
>  arch/x86/include/asm/svm.h         |   9 ++
>  arch/x86/kvm/svm/nested.c          |  94 +++++++++++++++---
>  arch/x86/kvm/svm/svm.c             | 152 +++++++++++++++++++++++------
>  arch/x86/kvm/svm/svm.h             |  28 ++++++
>  arch/x86/kvm/x86.c                 |  46 +++++++--
>  8 files changed, 289 insertions(+), 54 deletions(-)

Code looks good overall, I'll fixup the changelogs and comments myself.  I just
need to run it through my usual test flow, which I should get done tomorrow.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCHv4 00/11] SVM: virtual NMI
  2023-03-23  0:57 ` Sean Christopherson
@ 2023-03-23  1:14   ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2023-03-23  1:14 UTC (permalink / raw)
  To: Santosh Shukla
  Cc: kvm, pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Wed, Mar 22, 2023, Sean Christopherson wrote:
> On Mon, Feb 27, 2023, Santosh Shukla wrote:
> > Maxim Levitsky (2):
> >   KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
> >   KVM: SVM: add wrappers to enable/disable IRET interception
> > 
> > Santosh Shukla (6):
> >   KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is
> >     intercepting VINTR
> >   KVM: nSVM: Disable intercept of VINTR if saved RFLAG.IF is 0
> >   x86/cpu: Add CPUID feature bit for VNMI
> >   KVM: SVM: Add VNMI bit definition
> >   KVM: x86: add support for delayed virtual NMI injection interface
> >   KVM: nSVM: implement support for nested VNMI
> > 
> > Sean Christopherson (3):
> >   KVM: x86: Raise an event request when processing NMIs if an NMI is
> >     pending
> >   KVM: x86: Tweak the code and comment related to handling concurrent
> >     NMIs
> >   KVM: x86: Save/restore all NMIs when multiple NMIs are pending
> > 
> >  arch/x86/include/asm/cpufeatures.h |   1 +
> >  arch/x86/include/asm/kvm-x86-ops.h |   2 +
> >  arch/x86/include/asm/kvm_host.h    |  11 ++-
> >  arch/x86/include/asm/svm.h         |   9 ++
> >  arch/x86/kvm/svm/nested.c          |  94 +++++++++++++++---
> >  arch/x86/kvm/svm/svm.c             | 152 +++++++++++++++++++++++------
> >  arch/x86/kvm/svm/svm.h             |  28 ++++++
> >  arch/x86/kvm/x86.c                 |  46 +++++++--
> >  8 files changed, 289 insertions(+), 54 deletions(-)
> 
> Code looks good overall, I'll fixup the changelogs and comments myself.  I just
> need to run it through my usual test flow, which I should get done tomorrow.

Gah, saw something shiny and forgot to finish my thought.

My plan is to get this somewhat speculatively applied and soaking in linux-next asap,
even though the cpufeatures.h change needs more eyeballs.  I'll fixup and force push
if necessary; unless I'm missing something, this is the only SVM-specific series
that's destined for 6.4.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCHv4 00/11] SVM: virtual NMI
  2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
                   ` (12 preceding siblings ...)
  2023-03-23  0:57 ` Sean Christopherson
@ 2023-03-23 22:53 ` Sean Christopherson
  2023-03-24  8:25   ` Santosh Shukla
  13 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2023-03-23 22:53 UTC (permalink / raw)
  To: Sean Christopherson, kvm, Santosh Shukla
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets

On Mon, 27 Feb 2023 14:10:05 +0530, Santosh Shukla wrote:
> v2:
> https://lore.kernel.org/all/0f56e139-4c7f-5220-a4a2-99f87f45fd83@amd.com/
> 
> v3:
> https://lore.kernel.org/all/20230227035400.1498-1-santosh.shukla@amd.com/
>  - 09/11: Clubbed x86_ops delayed NMI with vNMI changes into one,
>    for better readability purpose (Sean Suggestion)
>  - Series includes suggestion and fixes proposed in v2 series.
>    Refer each patch for change history(v2-->v3).
> 
> [...]

Applied to kvm-x86 svm.  As mentioned in a previous reply, this is somewhat
speculative, i.e. needs acks for the cpufeatures.h change and might get
overwritten by a force push.

[01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR
        https://github.com/kvm-x86/linux/commit/5faaffab5ba8
[02/11] KVM: nSVM: Disable intercept of VINTR if saved RFLAG.IF is 0
        https://github.com/kvm-x86/linux/commit/7334ede457c6
[03/11] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
        https://github.com/kvm-x86/linux/commit/5d1ec4565200
[04/11] KVM: SVM: add wrappers to enable/disable IRET interception
        https://github.com/kvm-x86/linux/commit/772f254d4d56
[05/11] KVM: x86: Raise an event request when processing NMIs if an NMI is pending
        https://github.com/kvm-x86/linux/commit/2cb9317377ca
[06/11] KVM: x86: Tweak the code and comment related to handling concurrent NMIs
        https://github.com/kvm-x86/linux/commit/400fee8c9b2d
[07/11] KVM: x86: Save/restore all NMIs when multiple NMIs are pending
        https://github.com/kvm-x86/linux/commit/ab2ee212a57b
[08/11] x86/cpufeatures: Redefine synthetic virtual NMI bit as AMD's "real" vNMI
        https://github.com/kvm-x86/linux/commit/3763bf58029f
[09/11] KVM: SVM: Add VNMI bit definition
        https://github.com/kvm-x86/linux/commit/1c4522ab13b1
[10/11] KVM: x86: add support for delayed virtual NMI injection interface
        https://github.com/kvm-x86/linux/commit/fa4c027a7956
[11/11] KVM: nSVM: implement support for nested VNMI
        https://github.com/kvm-x86/linux/commit/0977cfac6e76

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCHv4 00/11] SVM: virtual NMI
  2023-03-23 22:53 ` Sean Christopherson
@ 2023-03-24  8:25   ` Santosh Shukla
  0 siblings, 0 replies; 22+ messages in thread
From: Santosh Shukla @ 2023-03-24  8:25 UTC (permalink / raw)
  To: Sean Christopherson, kvm
  Cc: pbonzini, jmattson, joro, linux-kernel, mail, mlevitsk,
	thomas.lendacky, vkuznets



On 3/24/2023 4:23 AM, Sean Christopherson wrote:
> On Mon, 27 Feb 2023 14:10:05 +0530, Santosh Shukla wrote:
>> v2:
>> https://lore.kernel.org/all/0f56e139-4c7f-5220-a4a2-99f87f45fd83@amd.com/
>>
>> v3:
>> https://lore.kernel.org/all/20230227035400.1498-1-santosh.shukla@amd.com/
>>  - 09/11: Clubbed x86_ops delayed NMI with vNMI changes into one,
>>    for better readability purpose (Sean Suggestion)
>>  - Series includes suggestion and fixes proposed in v2 series.
>>    Refer each patch for change history(v2-->v3).
>>
>> [...]
> 
> Applied to kvm-x86 svm.  As mentioned in a previous reply, this is somewhat
> speculative, i.e. needs acks for the cpufeatures.h change and might get
> overwritten by a force push.
> 

Thank you, Sean!

Best Regards,
Santosh

> [01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR
>         https://github.com/kvm-x86/linux/commit/5faaffab5ba8
> [02/11] KVM: nSVM: Disable intercept of VINTR if saved RFLAG.IF is 0
>         https://github.com/kvm-x86/linux/commit/7334ede457c6
> [03/11] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
>         https://github.com/kvm-x86/linux/commit/5d1ec4565200
> [04/11] KVM: SVM: add wrappers to enable/disable IRET interception
>         https://github.com/kvm-x86/linux/commit/772f254d4d56
> [05/11] KVM: x86: Raise an event request when processing NMIs if an NMI is pending
>         https://github.com/kvm-x86/linux/commit/2cb9317377ca
> [06/11] KVM: x86: Tweak the code and comment related to handling concurrent NMIs
>         https://github.com/kvm-x86/linux/commit/400fee8c9b2d
> [07/11] KVM: x86: Save/restore all NMIs when multiple NMIs are pending
>         https://github.com/kvm-x86/linux/commit/ab2ee212a57b
> [08/11] x86/cpufeatures: Redefine synthetic virtual NMI bit as AMD's "real" vNMI
>         https://github.com/kvm-x86/linux/commit/3763bf58029f
> [09/11] KVM: SVM: Add VNMI bit definition
>         https://github.com/kvm-x86/linux/commit/1c4522ab13b1
> [10/11] KVM: x86: add support for delayed virtual NMI injection interface
>         https://github.com/kvm-x86/linux/commit/fa4c027a7956
> [11/11] KVM: nSVM: implement support for nested VNMI
>         https://github.com/kvm-x86/linux/commit/0977cfac6e76
> 
> --
> https://github.com/kvm-x86/linux/tree/next
> https://github.com/kvm-x86/linux/tree/fixes


^ permalink raw reply	[flat|nested] 22+ messages in thread

Thread overview: 22+ messages
2023-02-27  8:40 [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 01/11] KVM: nSVM: Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting VINTR Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 02/11] KVM: nSVM: Disable intercept of VINTR if saved RFLAG.IF is 0 Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 03/11] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 04/11] KVM: SVM: add wrappers to enable/disable IRET interception Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 05/11] KVM: x86: Raise an event request when processing NMIs if an NMI is pending Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 06/11] KVM: x86: Tweak the code and comment related to handling concurrent NMIs Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 07/11] KVM: x86: Save/restore all NMIs when multiple NMIs are pending Santosh Shukla
2023-02-27  8:40 ` [PATCHv4 08/11] x86/cpu: Add CPUID feature bit for VNMI Santosh Shukla
2023-03-22 19:07   ` Sean Christopherson
2023-02-27  8:40 ` [PATCHv4 09/11] KVM: SVM: Add VNMI bit definition Santosh Shukla
2023-03-23  0:54   ` Sean Christopherson
2023-02-27  8:40 ` [PATCHv4 10/11] KVM: x86: add support for delayed virtual NMI injection interface Santosh Shukla
2023-03-23  0:49   ` Sean Christopherson
2023-02-27  8:40 ` [PATCHv4 11/11] KVM: nSVM: implement support for nested VNMI Santosh Shukla
2023-03-23  0:50   ` Sean Christopherson
2023-03-10  9:19 ` [PATCHv4 00/11] SVM: virtual NMI Santosh Shukla
2023-03-10 17:02   ` Sean Christopherson
2023-03-23  0:57 ` Sean Christopherson
2023-03-23  1:14   ` Sean Christopherson
2023-03-23 22:53 ` Sean Christopherson
2023-03-24  8:25   ` Santosh Shukla
