* [PATCH 0/4] hvm: add hvm_funcs hooks for msr intercept handling
@ 2023-02-27  7:56 Xenia Ragiadakou
  2023-02-27  7:56 ` [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise Xenia Ragiadakou
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-27  7:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

This patch series aims to make the MSR intercept handling performed in
vpmu code independent of the underlying virtualization technology.
It creates a common interface for setting/clearing MSR intercepts, and adds
hooks to the hvm_funcs table so that the SVM/VMX specific handlers can be
called from common code through generic HVM wrapper functions.
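
To illustrate the resulting shape (a rough sketch only; the exact hook and
wrapper definitions are the subject of patch 4), common code ends up going
through wrappers along these lines:

    static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
                                             int flags)
    {
        if ( hvm_funcs.set_msr_intercept )
            alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags);
    }

with the set_msr_intercept/clear_msr_intercept hooks filled in by the SVM
and VMX code respectively.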

Xenia Ragiadakou (4):
  x86/vpmu: rename {svm,vmx}_vpmu_initialise to
    {amd,core2}_vpmu_initialise
  x86/svm: split svm_intercept_msr() into
    svm_{set,clear}_msr_intercept()
  x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
  x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()

 xen/arch/x86/cpu/vpmu_amd.c             | 15 ++---
 xen/arch/x86/cpu/vpmu_intel.c           | 30 +++++-----
 xen/arch/x86/hvm/svm/svm.c              | 80 ++++++++++++++++---------
 xen/arch/x86/hvm/vmx/vmcs.c             | 38 ++++++------
 xen/arch/x86/hvm/vmx/vmx.c              | 46 +++++++-------
 xen/arch/x86/include/asm/hvm/hvm.h      | 32 ++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 13 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 14 +----
 8 files changed, 156 insertions(+), 112 deletions(-)

-- 
2.37.2




* [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise
  2023-02-27  7:56 [PATCH 0/4] hvm: add hvm_funcs hooks for msr intercept handling Xenia Ragiadakou
@ 2023-02-27  7:56 ` Xenia Ragiadakou
  2023-02-28 13:54   ` Jan Beulich
  2023-02-27  7:56 ` [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept() Xenia Ragiadakou
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-27  7:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

PMU virtualization does not depend on hardware virtualization support.
Rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise because
the {svm,vmx} prefix is misleading.

Take the opportunity to remove the likewise misleading comment stating that
vpmu is specific to HVM guests, and to correct the filename.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/cpu/vpmu_amd.c   | 6 +++---
 xen/arch/x86/cpu/vpmu_intel.c | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 58794a16f0..9df739aa3f 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -1,5 +1,5 @@
 /*
- * vpmu.c: PMU virtualization for HVM domain.
+ * vpmu_amd.c: AMD specific PMU virtualization.
  *
  * Copyright (c) 2010, Advanced Micro Devices, Inc.
  * Parts of this code are Copyright (c) 2007, Intel Corporation
@@ -480,7 +480,7 @@ static void cf_check amd_vpmu_dump(const struct vcpu *v)
     }
 }
 
-static int cf_check svm_vpmu_initialise(struct vcpu *v)
+static int cf_check amd_vpmu_initialise(struct vcpu *v)
 {
     struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
@@ -527,7 +527,7 @@ static int cf_check amd_allocate_context(struct vcpu *v)
 #endif
 
 static const struct arch_vpmu_ops __initconst_cf_clobber amd_vpmu_ops = {
-    .initialise = svm_vpmu_initialise,
+    .initialise = amd_vpmu_initialise,
     .do_wrmsr = amd_vpmu_do_wrmsr,
     .do_rdmsr = amd_vpmu_do_rdmsr,
     .do_interrupt = amd_vpmu_do_interrupt,
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index a8df52579d..bcfa187a14 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -1,5 +1,5 @@
 /*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
+ * vpmu_intel.c: CORE 2 specific PMU virtualization.
  *
  * Copyright (c) 2007, Intel Corporation.
  *
@@ -833,7 +833,7 @@ static void cf_check core2_vpmu_destroy(struct vcpu *v)
     vpmu_clear(vpmu);
 }
 
-static int cf_check vmx_vpmu_initialise(struct vcpu *v)
+static int cf_check core2_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     u64 msr_content;
@@ -898,7 +898,7 @@ static int cf_check vmx_vpmu_initialise(struct vcpu *v)
 }
 
 static const struct arch_vpmu_ops __initconst_cf_clobber core2_vpmu_ops = {
-    .initialise = vmx_vpmu_initialise,
+    .initialise = core2_vpmu_initialise,
     .do_wrmsr = core2_vpmu_do_wrmsr,
     .do_rdmsr = core2_vpmu_do_rdmsr,
     .do_interrupt = core2_vpmu_do_interrupt,
-- 
2.37.2




* [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
  2023-02-27  7:56 [PATCH 0/4] hvm: add hvm_funcs hooks for msr intercept handling Xenia Ragiadakou
  2023-02-27  7:56 ` [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise Xenia Ragiadakou
@ 2023-02-27  7:56 ` Xenia Ragiadakou
  2023-02-28 14:20   ` Jan Beulich
  2023-02-27  7:56 ` [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags Xenia Ragiadakou
  2023-02-27  7:56 ` [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept() Xenia Ragiadakou
  3 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-27  7:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

This change aims to render the control interface of MSR intercepts identical
between SVM and VMX code, so that the control of the MSR intercept in common
code can be done through an hvm_funcs callback.

Create two new functions:
- svm_set_msr_intercept(), enables interception of read/write accesses to the
  corresponding MSR, by setting the corresponding read/write bits in the MSRPM
  based on the flags
- svm_clear_msr_intercept(), disables interception of read/write accesses to
  the corresponding MSR, by clearing the corresponding read/write bits in the
  MSRPM based on the flags

More specifically:
- if flag is MSR_R, the functions {set,clear} the MSRPM bit that controls read
  access to the MSR
- if flag is MSR_W, the functions {set,clear} the MSRPM bit that controls write
  access to the MSR
- if flag is MSR_RW, the functions {set,clear} both MSRPM bits

Place the definitions of the flags in asm/hvm/hvm.h because they are intended
to be used by the VMX code as well.

Remove svm_intercept_msr() and MSR_INTERCEPT_* definitions, and use the new
functions and flags instead.
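
For example, a call site that previously used

    svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);

i.e. intercepting writes while letting reads through, now becomes

    svm_set_msr_intercept(v, msr, MSR_W);
    svm_clear_msr_intercept(v, msr, MSR_R);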

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/hvm/svm/svm.c              | 80 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/hvm.h      |  4 ++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 13 ++--
 4 files changed, 66 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 9df739aa3f..ed6706959e 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -165,8 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+        svm_clear_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_W);
+        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -179,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index a43bcf2e92..eb144272f4 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -288,23 +288,34 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
+void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
 {
-    unsigned long *msr_bit;
-    const struct domain *d = v->domain;
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    if ( msr_bit == NULL )
+        return;
 
-    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
-    BUG_ON(msr_bit == NULL);
     msr &= 0x1fff;
 
-    if ( flags & MSR_INTERCEPT_READ )
+    if ( flags & MSR_R )
          __set_bit(msr * 2, msr_bit);
-    else if ( !monitored_msr(d, msr) )
-         __clear_bit(msr * 2, msr_bit);
-
-    if ( flags & MSR_INTERCEPT_WRITE )
+    if ( flags & MSR_W )
         __set_bit(msr * 2 + 1, msr_bit);
-    else if ( !monitored_msr(d, msr) )
+}
+
+void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
+{
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    if ( msr_bit == NULL )
+        return;
+
+    if ( monitored_msr(v->domain, msr) )
+        return;
+
+    if ( flags & MSR_R )
+        __clear_bit(msr * 2, msr_bit);
+    if ( flags & MSR_W )
         __clear_bit(msr * 2 + 1, msr_bit);
 }
 
@@ -312,8 +323,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
 {
     struct vcpu *v;
 
-    for_each_vcpu ( d, v )
-        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
+    for_each_vcpu ( d, v ) {
+        svm_set_msr_intercept(v, msr, MSR_W);
+        svm_clear_msr_intercept(v, msr, MSR_R);
+    }
 }
 
 static void svm_save_dr(struct vcpu *v)
@@ -330,10 +343,10 @@ static void svm_save_dr(struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -361,10 +374,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -595,22 +608,31 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
     /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_SPEC_CTRL,
-                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibrs )
+        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
     /*
      * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
      * setting and avoid having to perform a rdmsr on vmexit to get the guest
      * setting even if VIRT_SSBD is offered to Xen itself.
      */
-    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
-                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
-                      !cpu_has_amd_ssbd ?
-                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
+        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
+    }
+    else
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
+    }
 
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_PRED_CMD,
-                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibpb )
+        svm_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 43d3fc2498..f853e2f3e8 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -261,6 +261,10 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#define MSR_R       BIT(0, U)
+#define MSR_W       BIT(1, U)
+#define MSR_RW      (MSR_W | MSR_R)
+
 extern const struct hvm_function_table *start_svm(void);
 extern const struct hvm_function_table *start_vmx(void);
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index e87728fa81..ed2e55e5cf 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-#define MSR_INTERCEPT_NONE    0
-#define MSR_INTERCEPT_READ    1
-#define MSR_INTERCEPT_WRITE   2
-#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
-#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
-#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
+void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+#define svm_disable_intercept_for_msr(v, msr) \
+    svm_clear_msr_intercept((v), (msr), MSR_RW)
+#define svm_enable_intercept_for_msr(v, msr) \
+    svm_set_msr_intercept((v), (msr), MSR_RW)
 
 /*
  * VMCB accessor functions.
-- 
2.37.2




* [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
  2023-02-27  7:56 [PATCH 0/4] hvm: add hvm_funcs hooks for msr intercept handling Xenia Ragiadakou
  2023-02-27  7:56 ` [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise Xenia Ragiadakou
  2023-02-27  7:56 ` [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept() Xenia Ragiadakou
@ 2023-02-27  7:56 ` Xenia Ragiadakou
  2023-02-28 14:31   ` Jan Beulich
  2023-02-27  7:56 ` [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept() Xenia Ragiadakou
  3 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-27  7:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Jun Nakajima, Kevin Tian, Jan Beulich, Andrew Cooper,
	Roger Pau Monné,
	Wei Liu

Replace enum vmx_msr_intercept_type with the msr access flags, defined
in hvm.h, so that the functions {svm,vmx}_{set,clear}_msr_intercept()
share the same prototype.
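
In practice only the flag names used at the call sites change, e.g.

    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);

becomes

    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);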

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/cpu/vpmu_intel.c           | 24 +++++++-------
 xen/arch/x86/hvm/vmx/vmcs.c             | 38 ++++++++++-----------
 xen/arch/x86/hvm/vmx/vmx.c              | 44 ++++++++++++-------------
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 14 ++------
 4 files changed, 54 insertions(+), 66 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index bcfa187a14..bd91c79a36 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index ed71ecfb62..22c12509d5 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -902,8 +902,7 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type)
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -917,25 +916,24 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_high);
     }
     else
         ASSERT(!"MSR out of range for interception\n");
 }
 
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type)
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
@@ -945,17 +943,17 @@ void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -1162,17 +1160,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm.vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, MSR_RW);
         if ( paging_mode_hap(d) && (!is_iommu_enabled(d) || iommu_snoop) )
-            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 0ec33bcc18..87c47c002c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -802,7 +802,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
      */
     if ( cp->feat.ibrsb )
     {
-        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
         if ( rc )
@@ -810,7 +810,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     }
     else
     {
-        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
         if ( rc && rc != -ESRCH )
@@ -820,20 +820,20 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
 
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
-        vmx_clear_msr_intercept(v, MSR_PRED_CMD,  VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PRED_CMD,  VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 
     /* MSR_FLUSH_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.l1d_flush )
-        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
 
     if ( cp->feat.pks )
-        vmx_clear_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PKRS, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PKRS, MSR_RW);
 
  out:
     vmx_vmcs_exit(v);
@@ -1429,7 +1429,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1440,7 +1440,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !is_iommu_enabled(v->domain) || iommu_snoop )
-                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
     }
@@ -1906,9 +1906,9 @@ static void cf_check vmx_update_guest_efer(struct vcpu *v)
      * into hardware, clear the read intercept to avoid unnecessary VMExits.
      */
     if ( guest_efer == v->arch.hvm.guest_efer )
-        vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_EFER, MSR_R);
     else
-        vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_EFER, MSR_R);
 }
 
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
@@ -2335,7 +2335,7 @@ static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
+        vmx_set_msr_intercept(v, msr, MSR_W);
 }
 
 static void cf_check vmx_vcpu_update_eptp(struct vcpu *v)
@@ -3502,17 +3502,17 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
             {
                 for ( msr = MSR_X2APIC_FIRST;
                       msr <= MSR_X2APIC_LAST; msr++ )
-                    vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
+                    vmx_clear_msr_intercept(v, msr, MSR_R);
 
-                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
-                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
-                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, MSR_R);
             }
             if ( cpu_has_vmx_virtual_intr_delivery )
             {
-                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
-                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
-                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, MSR_W);
             }
         }
         else
@@ -3523,7 +3523,7 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
            SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
         for ( msr = MSR_X2APIC_FIRST;
               msr <= MSR_X2APIC_LAST; msr++ )
-            vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, msr, MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3659,7 +3659,7 @@ static int cf_check vmx_msr_write_intercept(
                         return X86EMUL_OKAY;
                     }
 
-                    vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
+                    vmx_clear_msr_intercept(v, lbr->base + i, MSR_RW);
                 }
             }
 
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 0a84e74478..e08c506be5 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -644,18 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-
-/* MSR intercept bitmap infrastructure. */
-enum vmx_msr_intercept_type {
-    VMX_MSR_R  = 1,
-    VMX_MSR_W  = 2,
-    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
-};
-
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type);
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type);
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.37.2




* [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
  2023-02-27  7:56 [PATCH 0/4] hvm: add hvm_funcs hooks for msr intercept handling Xenia Ragiadakou
                   ` (2 preceding siblings ...)
  2023-02-27  7:56 ` [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags Xenia Ragiadakou
@ 2023-02-27  7:56 ` Xenia Ragiadakou
  2023-02-28 14:58   ` Jan Beulich
  3 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-27  7:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

Add hvm_funcs hooks for {set,clear}_msr_intercept(), so that MSR intercepts
can be controlled from common vpmu code.
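
For instance, instead of calling vmx_clear_msr_intercept() or
svm_clear_msr_intercept() directly, common vpmu code can now use

    hvm_clear_msr_intercept(v, msr, MSR_RW);

and the call gets dispatched to the SVM or VMX implementation through the
new hooks.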

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/cpu/vpmu_amd.c             | 10 ++++-----
 xen/arch/x86/cpu/vpmu_intel.c           | 24 ++++++++++-----------
 xen/arch/x86/hvm/svm/svm.c              |  4 ++--
 xen/arch/x86/hvm/vmx/vmcs.c             |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c              |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      | 28 +++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h |  4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h |  4 ++--
 8 files changed, 55 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index ed6706959e..a306297a69 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -165,9 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_clear_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_W);
-        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
+        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
+        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -180,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_set_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
+        hvm_set_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index bd91c79a36..46ae38a326 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index eb144272f4..e54dc08e8a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -288,7 +288,7 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
+void cf_check svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
@@ -303,7 +303,7 @@ void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
         __set_bit(msr * 2 + 1, msr_bit);
 }
 
-void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
+void cf_check svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 22c12509d5..3d0022f392 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -902,7 +902,7 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type)
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, uint32_t msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -933,7 +933,7 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type)
         ASSERT(!"MSR out of range for interception\n");
 }
 
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type)
+void cf_check vmx_set_msr_intercept(struct vcpu *v, uint32_t msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 87c47c002c..d3f2b3add4 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2765,6 +2765,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_vlapic_mode = vmx_vlapic_msr_changed,
     .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
+    .set_msr_intercept    = vmx_set_msr_intercept,
+    .clear_msr_intercept  = vmx_clear_msr_intercept,
     .enable_msr_interception = vmx_enable_msr_interception,
     .altp2m_vcpu_update_p2m = vmx_vcpu_update_eptp,
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index f853e2f3e8..dd9aa42d0a 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -224,6 +224,8 @@ struct hvm_function_table {
                                 paddr_t *L1_gpa, unsigned int *page_order,
                                 uint8_t *p2m_acc, struct npfec npfec);
 
+    void (*set_msr_intercept)(struct vcpu *v, uint32_t msr, int flags);
+    void (*clear_msr_intercept)(struct vcpu *v, uint32_t msr, int flags);
     void (*enable_msr_interception)(struct domain *d, uint32_t msr);
 
     /* Alternate p2m */
@@ -658,6 +660,20 @@ static inline int nhvm_hap_walk_L1_p2m(
         v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
+                                         int flags)
+{
+    if ( hvm_funcs.set_msr_intercept )
+        alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags);
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
+                                           int flags)
+{
+    if ( hvm_funcs.clear_msr_intercept )
+        alternative_vcall(hvm_funcs.clear_msr_intercept, v, msr, flags);
+}
+
 static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
 {
     alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
@@ -916,6 +932,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     ASSERT_UNREACHABLE();
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
+                                         int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
+                                           int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index ed2e55e5cf..dbe8ba89cc 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -585,8 +585,8 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
-void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+void cf_check svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+void cf_check svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
 #define svm_disable_intercept_for_msr(v, msr) \
     svm_clear_msr_intercept((v), (msr), MSR_RW)
 #define svm_enable_intercept_for_msr(v, msr) \
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index e08c506be5..f2880c8122 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -644,8 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, uint32_t msr, int type);
+void cf_check vmx_set_msr_intercept(struct vcpu *v, uint32_t msr, int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.37.2




* Re: [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise
  2023-02-27  7:56 ` [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise Xenia Ragiadakou
@ 2023-02-28 13:54   ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 13:54 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian, xen-devel

On 27.02.2023 08:56, Xenia Ragiadakou wrote:
> PMU virtualization does not depend on hardware virtualization support.
> Rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise because
> the {svm,vmx} prefix is misleading.
> 
> Take the opportunity to remove the likewise misleading comment stating that
> vpmu is specific to HVM guests, and to correct the filename.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>





* Re: [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
  2023-02-27  7:56 ` [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept() Xenia Ragiadakou
@ 2023-02-28 14:20   ` Jan Beulich
  2023-02-28 15:05     ` Xenia Ragiadakou
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 14:20 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 27.02.2023 08:56, Xenia Ragiadakou wrote:
> This change aims to render the control interface of MSR intercepts identical
> between SVM and VMX code, so that the control of the MSR intercept in common
> code can be done through an hvm_funcs callback.
> 
> Create two new functions:
> - svm_set_msr_intercept(), enables interception of read/write accesses to the
>   corresponding MSR, by setting the corresponding read/write bits in the MSRPM
>   based on the flags
> - svm_clear_msr_intercept(), disables interception of read/write accesses to
>   the corresponding MSR, by clearing the corresponding read/write bits in the
>   MSRPM based on the flags

In how far is VMX'es present model better than SVM's? They both have pros and
cons, depending on the specific use. I'm not asking to do it the other way
around (at least not yet), I'd merely like to understand why we're going to
gain two new hooks (if I'm not mistaken) when we could also get away with
just one.

> --- a/xen/arch/x86/cpu/vpmu_amd.c
> +++ b/xen/arch/x86/cpu/vpmu_amd.c
> @@ -165,8 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> -        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
> -        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
> +        svm_clear_msr_intercept(v, counters[i], MSR_RW);
> +        svm_set_msr_intercept(v, ctrls[i], MSR_W);
> +        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
>      }
>  
>      msr_bitmap_on(vpmu);
> @@ -179,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> -        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
> -        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
> +        svm_set_msr_intercept(v, counters[i], MSR_RW);
> +        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
>      }

This, aiui, restores back original state (I question the condition that the
caller uses, though, but that's a separate issue). Therefore is the single
"set" in the earlier function actually needed?

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -288,23 +288,34 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
>      return msr_bit;
>  }
>  
> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)

Can the last parameter become "unsigned int", please?

>  {
> -    unsigned long *msr_bit;
> -    const struct domain *d = v->domain;
> +    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
> +
> +    if ( msr_bit == NULL )
> +        return;
>  
> -    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
> -    BUG_ON(msr_bit == NULL);

The conversion from BUG_ON() to "return" needs explanation; I don't see
why that's warranted here. From all I can tell the case is impossible
due to the way construct_vmcb() works, and hence BUG_ON() is appropriate
(and personally I would also be fine with no check at all, provided I'm
not overlooking anything).

> @@ -312,8 +323,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
>  {
>      struct vcpu *v;
>  
> -    for_each_vcpu ( d, v )
> -        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
> +    for_each_vcpu ( d, v ) {

Nit: Brace placement.

> @@ -595,22 +608,31 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
>      vmcb_set_exception_intercepts(vmcb, bitmap);
>  
>      /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
> -    svm_intercept_msr(v, MSR_SPEC_CTRL,
> -                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
> +    if ( cp->extd.ibrs )
> +        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
> +    else
> +        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
>  
>      /*
>       * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
>       * setting and avoid having to perform a rdmsr on vmexit to get the guest
>       * setting even if VIRT_SSBD is offered to Xen itself.
>       */
> -    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
> -                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
> -                      !cpu_has_amd_ssbd ?
> -                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
> +    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
> +    {
> +        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
> +        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
> +    }
> +    else
> +    {
> +        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
> +    }

Preferably omit the braces for "else" here, just like you do above and ...

>      /* Give access to MSR_PRED_CMD if the guest has been told about it. */
> -    svm_intercept_msr(v, MSR_PRED_CMD,
> -                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
> +    if ( cp->extd.ibpb )
> +        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
> +    else
> +        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);

... here.

> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
> @@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
>  
>  void setup_vmcb_dump(void);
>  
> -#define MSR_INTERCEPT_NONE    0
> -#define MSR_INTERCEPT_READ    1
> -#define MSR_INTERCEPT_WRITE   2
> -#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
> -#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
> -#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
> +void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
> +#define svm_disable_intercept_for_msr(v, msr) \
> +    svm_clear_msr_intercept((v), (msr), MSR_RW)
> +#define svm_enable_intercept_for_msr(v, msr) \
> +    svm_set_msr_intercept((v), (msr), MSR_RW)

Please avoid excess parentheses. Also could you clarify why you retain
these shorthands when you don't use them in the conversion that you're
doing (e.g. amd_vpmu_unset_msr_bitmap())? Are you intending them to go
away down the road?

Jan



* Re: [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
  2023-02-27  7:56 ` [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags Xenia Ragiadakou
@ 2023-02-28 14:31   ` Jan Beulich
  2023-02-28 14:34     ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 14:31 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Jun Nakajima, Kevin Tian, Andrew Cooper, Roger Pau Monné,
	Wei Liu, xen-devel

On 27.02.2023 08:56, Xenia Ragiadakou wrote:
> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> @@ -644,18 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
>      return 0;
>  }
>  
> -
> -/* MSR intercept bitmap infrastructure. */
> -enum vmx_msr_intercept_type {
> -    VMX_MSR_R  = 1,
> -    VMX_MSR_W  = 2,
> -    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
> -};
> -
> -void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
> -                             enum vmx_msr_intercept_type type);
> -void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
> -                           enum vmx_msr_intercept_type type);
> +void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
> +void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);

unsigned int please again for the last parameter each.

Jan



* Re: [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
  2023-02-28 14:31   ` Jan Beulich
@ 2023-02-28 14:34     ` Jan Beulich
  2023-02-28 15:07       ` Xenia Ragiadakou
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 14:34 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Jun Nakajima, Kevin Tian, Andrew Cooper, Roger Pau Monné,
	Wei Liu, xen-devel

On 28.02.2023 15:31, Jan Beulich wrote:
> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
>> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
>> @@ -644,18 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
>>      return 0;
>>  }
>>  
>> -
>> -/* MSR intercept bitmap infrastructure. */
>> -enum vmx_msr_intercept_type {
>> -    VMX_MSR_R  = 1,
>> -    VMX_MSR_W  = 2,
>> -    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
>> -};
>> -
>> -void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
>> -                             enum vmx_msr_intercept_type type);
>> -void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
>> -                           enum vmx_msr_intercept_type type);
>> +void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
>> +void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);
> 
> unsigned int please again for the last parameter each.

Oh, also, another remark here towards patch 2: Note how the middle parameter
each is "unsigned int msr" here, when in SVM code you make it (kind of leave
it) uint32_t. As per ./CODING_STYLE unsigned int is to be preferred; in any
event both (and the eventual hook) want to agree.

Jan



* Re: [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
  2023-02-27  7:56 ` [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept() Xenia Ragiadakou
@ 2023-02-28 14:58   ` Jan Beulich
  2023-02-28 20:14     ` Xenia Ragiadakou
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 14:58 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian, xen-devel

On 27.02.2023 08:56, Xenia Ragiadakou wrote:
> Add hvm_funcs hooks for {set,clear}_msr_intercept(), so that MSR intercepts
> can be controlled from common vpmu code.

What is this going to buy us? All calls ...

> --- a/xen/arch/x86/cpu/vpmu_amd.c
> +++ b/xen/arch/x86/cpu/vpmu_amd.c
> @@ -165,9 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> -        svm_clear_msr_intercept(v, counters[i], MSR_RW);
> -        svm_set_msr_intercept(v, ctrls[i], MSR_W);
> -        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
> +        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
> +        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
> +        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
>      }
>  
>      msr_bitmap_on(vpmu);
> @@ -180,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> -        svm_set_msr_intercept(v, counters[i], MSR_RW);
> -        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
> +        hvm_set_msr_intercept(v, counters[i], MSR_RW);
> +        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
>      }
>  
>      msr_bitmap_off(vpmu);

... here will go to the SVM functions anyway, while ...

> --- a/xen/arch/x86/cpu/vpmu_intel.c
> +++ b/xen/arch/x86/cpu/vpmu_intel.c
> @@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
>  
>      /* Allow Read/Write PMU Counters MSR Directly. */
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
> -        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
> +        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>  
>      for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
> -        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
> +        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>  
>          if ( full_width_write )
> -            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
> +            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>      }
>  
>      /* Allow Read PMU Non-global Controls Directly. */
>      for ( i = 0; i < arch_pmc_cnt; i++ )
> -        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
> +        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>  
> -    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
> -    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
> +    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
> +    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>  }
>  
>  static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
> @@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
>      unsigned int i;
>  
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
> -        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
> +        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>  
>      for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
> -        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
> +        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>  
>          if ( full_width_write )
> -            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
> +            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>      }
>  
>      for ( i = 0; i < arch_pmc_cnt; i++ )
> -        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
> +        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>  
> -    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
> -    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
> +    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
> +    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>  }
>  
>  static inline void __core2_vpmu_save(struct vcpu *v)

... all calls here will go to VMX'es. For making either vpmu_<vendor>.c
build without that vendor's virtualization enabled, isn't it all it
takes to have ...

> @@ -916,6 +932,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>      ASSERT_UNREACHABLE();
>  }
>  
> +static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
> +                                         int flags)
> +{
> +    ASSERT_UNREACHABLE();
> +}
> +
> +static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
> +                                           int flags)
> +{
> +    ASSERT_UNREACHABLE();
> +}

... respective SVM and VMX stubs in place instead?
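
For illustration, a rough sketch of what I mean (the CONFIG_SVM guard is an
assumed name used purely for illustration), with a matching pair of
vmx_{set,clear}_msr_intercept() stubs on the VMX side:

#ifndef CONFIG_SVM
static inline void svm_set_msr_intercept(struct vcpu *v, uint32_t msr,
                                         unsigned int flags)
{
    ASSERT_UNREACHABLE();
}

static inline void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
                                           unsigned int flags)
{
    ASSERT_UNREACHABLE();
}
#endif /* !CONFIG_SVM */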

Jan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
  2023-02-28 14:20   ` Jan Beulich
@ 2023-02-28 15:05     ` Xenia Ragiadakou
  2023-02-28 15:10       ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-28 15:05 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

Hi Jan,

On 2/28/23 16:20, Jan Beulich wrote:
> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>> This change aims to render the control interface of MSR intercepts identical
>> between SVM and VMX code, so that the control of the MSR intercept in common
>> code can be done through an hvm_funcs callback.
>>
>> Create two new functions:
>> - svm_set_msr_intercept(), enables interception of read/write accesses to the
>>    corresponding MSR, by setting the corresponding read/write bits in the MSRPM
>>    based on the flags
>> - svm_clear_msr_intercept(), disables interception of read/write accesses to
>>    the corresponding MSR, by clearing the corresponding read/write bits in the
>>    MSRPM based on the flags
> 
> In how far is VMX'es present model better than SVM's? They both have pros and
> cons, depending on the specific use. I'm not asking to do it the other way
> around (at least not yet), I'd merely like to understand why we're going to
> gain two new hooks (if I'm not mistaken) when we could also get away with
> just one.

SVM approach always touches both read/write bits (either by setting or 
clearing them). I thought that using the SVM approach on VMX could be 
considered a functional change (because there are parts where VMX 
assumes that a bit is already set or cleared and does not touch it).
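
To illustrate the difference, here is a self-contained toy sketch (made-up
names and a made-up bitmap layout, not the actual Xen code):

#include <stdint.h>

#define MSR_R  1
#define MSR_W  2
#define MSR_RW (MSR_R | MSR_W)

static uint64_t rd_bits, wr_bits;    /* toy stand-in for the MSR bitmap */

/* SVM style: one call, both bits always rewritten to match 'flags'. */
static void intercept_msr(uint32_t msr, int flags)
{
    uint64_t mask = 1ULL << (msr & 63);

    if ( flags & MSR_R ) rd_bits |= mask; else rd_bits &= ~mask;
    if ( flags & MSR_W ) wr_bits |= mask; else wr_bits &= ~mask;
}

/*
 * VMX style: set/clear only touch the bits named in 'type'; callers rely
 * on the other bit already having the wanted value.
 */
static void set_msr_intercept(uint32_t msr, int type)
{
    uint64_t mask = 1ULL << (msr & 63);

    if ( type & MSR_R ) rd_bits |= mask;
    if ( type & MSR_W ) wr_bits |= mask;
}

static void clear_msr_intercept(uint32_t msr, int type)
{
    uint64_t mask = 1ULL << (msr & 63);

    if ( type & MSR_R ) rd_bits &= ~mask;
    if ( type & MSR_W ) wr_bits &= ~mask;
}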

> 
>> --- a/xen/arch/x86/cpu/vpmu_amd.c
>> +++ b/xen/arch/x86/cpu/vpmu_amd.c
>> @@ -165,8 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>>   
>>       for ( i = 0; i < num_counters; i++ )
>>       {
>> -        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
>> -        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
>> +        svm_clear_msr_intercept(v, counters[i], MSR_RW);
>> +        svm_set_msr_intercept(v, ctrls[i], MSR_W);
>> +        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
>>       }
>>   
>>       msr_bitmap_on(vpmu);
>> @@ -179,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>>   
>>       for ( i = 0; i < num_counters; i++ )
>>       {
>> -        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
>> -        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
>> +        svm_set_msr_intercept(v, counters[i], MSR_RW);
>> +        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
>>       }
> 
> This, aiui, restores the original state (I question the condition that the
> caller uses, though, but that's a separate issue). Therefore, is the single
> "set" in the earlier function actually needed?

This is what the svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE) 
call does, i.e. it sets the WRITE bit and clears the READ bit. The set 
is not needed if the bit is already set, but in my opinion the 
redundant parts should be removed in another patch.
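
In other words, the old call and the new pair in the hunk above are meant
to produce the same end state (call sites repeated here only for clarity):

/* old interface: both bits forced into the state implied by the argument */
svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);

/* new interface: the same end state, written as an explicit set + clear */
svm_set_msr_intercept(v, ctrls[i], MSR_W);
svm_clear_msr_intercept(v, ctrls[i], MSR_R);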

> 
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -288,23 +288,34 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
>>       return msr_bit;
>>   }
>>   
>> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
>> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
> 
> Can the last parameter become "unsigned int", please?
> 
>>   {
>> -    unsigned long *msr_bit;
>> -    const struct domain *d = v->domain;
>> +    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
>> +
>> +    if ( msr_bit == NULL )
>> +        return;
>>   
>> -    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
>> -    BUG_ON(msr_bit == NULL);
> 
> The conversion from BUG_ON() to "return" needs explanation; I don't see
> why that's warranted here. From all I can tell the case is impossible
> due to the way construct_vmcb() works, and hence BUG_ON() is appropriate
> (and personally I would also be fine with no check at all, provided I'm
> not overlooking anything).

It was my mistake; I should not have removed it.

> 
>> @@ -312,8 +323,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
>>   {
>>       struct vcpu *v;
>>   
>> -    for_each_vcpu ( d, v )
>> -        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
>> +    for_each_vcpu ( d, v ) {
> 
> Nit: Brace placement.

Sorry. I will fix.

> 
>> @@ -595,22 +608,31 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
>>       vmcb_set_exception_intercepts(vmcb, bitmap);
>>   
>>       /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
>> -    svm_intercept_msr(v, MSR_SPEC_CTRL,
>> -                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
>> +    if ( cp->extd.ibrs )
>> +        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
>> +    else
>> +        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
>>   
>>       /*
>>        * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
>>        * setting and avoid having to perform a rdmsr on vmexit to get the guest
>>        * setting even if VIRT_SSBD is offered to Xen itself.
>>        */
>> -    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
>> -                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
>> -                      !cpu_has_amd_ssbd ?
>> -                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
>> +    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
>> +    {
>> +        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
>> +        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
>> +    }
>> +    else
>> +    {
>> +        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
>> +    }
> 
> Preferably omit the braces for "else" here, just like you do above and ...

I added them for symmetry, since the first branch has them. I find it 
easier to follow, personally. I can omit them.

> 
>>       /* Give access to MSR_PRED_CMD if the guest has been told about it. */
>> -    svm_intercept_msr(v, MSR_PRED_CMD,
>> -                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
>> +    if ( cp->extd.ibpb )
>> +        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
>> +    else
>> +        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
> 
> ... here.
> 
>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>> @@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
>>   
>>   void setup_vmcb_dump(void);
>>   
>> -#define MSR_INTERCEPT_NONE    0
>> -#define MSR_INTERCEPT_READ    1
>> -#define MSR_INTERCEPT_WRITE   2
>> -#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
>> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
>> -#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
>> -#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
>> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>> +void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>> +#define svm_disable_intercept_for_msr(v, msr) \
>> +    svm_clear_msr_intercept((v), (msr), MSR_RW)
>> +#define svm_enable_intercept_for_msr(v, msr) \
>> +    svm_set_intercept_msr((v), (msr), MSR_RW)
> 
> Please avoid excess parentheses. Also could you clarify why you retain
> these shorthands when you don't use them in the conversion that you're
> doing (e.g. ())? Are you intending them to go
> away down the road?

Ok.
I did not understand the question. Which shorthands?

> 
> Jan

-- 
Xenia


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
  2023-02-28 14:34     ` Jan Beulich
@ 2023-02-28 15:07       ` Xenia Ragiadakou
  0 siblings, 0 replies; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-28 15:07 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Jun Nakajima, Kevin Tian, Andrew Cooper, Roger Pau Monné,
	Wei Liu, xen-devel


On 2/28/23 16:34, Jan Beulich wrote:
> On 28.02.2023 15:31, Jan Beulich wrote:
>> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>>> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
>>> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
>>> @@ -644,18 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
>>>       return 0;
>>>   }
>>>   
>>> -
>>> -/* MSR intercept bitmap infrastructure. */
>>> -enum vmx_msr_intercept_type {
>>> -    VMX_MSR_R  = 1,
>>> -    VMX_MSR_W  = 2,
>>> -    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
>>> -};
>>> -
>>> -void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
>>> -                             enum vmx_msr_intercept_type type);
>>> -void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
>>> -                           enum vmx_msr_intercept_type type);
>>> +void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
>>> +void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);
>>
>> unsigned int please again for the last parameter each.
> 
> Oh, also, another remark here towards patch 2: Note how the middle parameter
> each is "unsigned int msr" here, when in SVM code you make it (kind of leave
> it) uint32_t. As per ./CODING_STYLE unsigned int is to be preferred; in any
> event both (and the eventual hook) want to agree.

Thx. I will fix and keep it in mind.

> 
> Jan

-- 
Xenia


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
  2023-02-28 15:05     ` Xenia Ragiadakou
@ 2023-02-28 15:10       ` Jan Beulich
  2023-02-28 15:17         ` Xenia Ragiadakou
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 15:10 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 28.02.2023 16:05, Xenia Ragiadakou wrote:
> On 2/28/23 16:20, Jan Beulich wrote:
>> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>>> This change aims to render the control interface of MSR intercepts identical
>>> between SVM and VMX code, so that the control of the MSR intercept in common
>>> code can be done through an hvm_funcs callback.
>>>
>>> Create two new functions:
>>> - svm_set_msr_intercept(), enables interception of read/write accesses to the
>>>    corresponding MSR, by setting the corresponding read/write bits in the MSRPM
>>>    based on the flags
>>> - svm_clear_msr_intercept(), disables interception of read/write accesses to
>>>    the corresponding MSR, by clearing the corresponding read/write bits in the
>>>    MSRPM based on the flags
>>
>> In how far is VMX'es present model better than SVM's? They both have pros and
>> cons, depending on the specific use. I'm not asking to do it the other way
>> around (at least not yet), I'd merely like to understand why we're going to
>> gain two new hooks (if I'm not mistaken) when we could also get away with
>> just one.
> 
> SVM approach always touches both read/write bits (either by setting or 
> clearing them). I thought that using the SVM approach on VMX could be 
> considered a functional change (because there are parts where VMX 
> assumes that a bit is already set or cleared and does not touch it).

As per my comment on the last patch, a question is whether both actually
need to become uniform. But if they do, then the new model should imo
be followed right away, and that means doing away with VMX'es simply
leaving bits alone when already in a known state.

>>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>> @@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
>>>   
>>>   void setup_vmcb_dump(void);
>>>   
>>> -#define MSR_INTERCEPT_NONE    0
>>> -#define MSR_INTERCEPT_READ    1
>>> -#define MSR_INTERCEPT_WRITE   2
>>> -#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
>>> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
>>> -#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
>>> -#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
>>> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>>> +void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>>> +#define svm_disable_intercept_for_msr(v, msr) \
>>> +    svm_clear_msr_intercept((v), (msr), MSR_RW)
>>> +#define svm_enable_intercept_for_msr(v, msr) \
>>> +    svm_set_intercept_msr((v), (msr), MSR_RW)
>>
>> Please avoid excess parentheses. Also could you clarify why you retain
>> these shorthands when you don't use them in the conversion that you're
>> doing (e.g. ())? Are you intending them to go
>> away down the road?
> 
> Ok.
> I did not understand the question. Which shorthands?

svm_disable_intercept_for_msr() and svm_enable_intercept_for_msr().

Jan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
  2023-02-28 15:10       ` Jan Beulich
@ 2023-02-28 15:17         ` Xenia Ragiadakou
  2023-02-28 16:14           ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-28 15:17 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel


On 2/28/23 17:10, Jan Beulich wrote:
> On 28.02.2023 16:05, Xenia Ragiadakou wrote:
>> On 2/28/23 16:20, Jan Beulich wrote:
>>> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>>>> This change aims to render the control interface of MSR intercepts identical
>>>> between SVM and VMX code, so that the control of the MSR intercept in common
>>>> code can be done through an hvm_funcs callback.
>>>>
>>>> Create two new functions:
>>>> - svm_set_msr_intercept(), enables interception of read/write accesses to the
>>>>     corresponding MSR, by setting the corresponding read/write bits in the MSRPM
>>>>     based on the flags
>>>> - svm_clear_msr_intercept(), disables interception of read/write accesses to
>>>>     the corresponding MSR, by clearing the corresponding read/write bits in the
>>>>     MSRPM based on the flags
>>>
>>> In how far is VMX'es present model better than SVM's? They both have pros and
>>> cons, depending on the specific use. I'm not asking to do it the other way
>>> around (at least not yet), I'd merely like to understand why we're going to
>>> gain two new hooks (if I'm not mistaken) when we could also get away with
>>> just one.
>>
>> SVM approach always touches both read/write bits (either by setting or
>> clearing them). I thought that using the SVM approach on VMX could be
>> considered a functional change (because there are parts where VMX
>> assumes that a bit is already set or cleared and does not touch it).
> 
> As per my comment on the last patch, a question is whether both actually
> need to become uniform. But if they do, then the new model should imo
> be followed right away, and that means doing away with VMX'es simply
> leaving bits alone when already in a known state.

But the SVM implementation does not make that assumption. I can do it 
and remove the 'no functional change' part.

> 
>>>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>>> @@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
>>>>    
>>>>    void setup_vmcb_dump(void);
>>>>    
>>>> -#define MSR_INTERCEPT_NONE    0
>>>> -#define MSR_INTERCEPT_READ    1
>>>> -#define MSR_INTERCEPT_WRITE   2
>>>> -#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
>>>> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
>>>> -#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
>>>> -#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
>>>> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>>>> +void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>>>> +#define svm_disable_intercept_for_msr(v, msr) \
>>>> +    svm_clear_msr_intercept((v), (msr), MSR_RW)
>>>> +#define svm_enable_intercept_for_msr(v, msr) \
>>>> +    svm_set_intercept_msr((v), (msr), MSR_RW)
>>>
>>> Please avoid excess parentheses. Also could you clarify why you retain
>>> these shorthands when you don't use them in the conversion that you're
>>> doing (e.g. ())? Are you intending them to go
>>> away down the road?
>>
>> Ok.
>> I did not understand the question. Which shorthands?
> 
> svm_disable_intercept_for_msr() and svm_enable_intercept_for_msr().

Are you suggesting replacing svm_{en,dis}able_intercept_for_msr() with 
svm_{set,clear}_msr_intercept()?  svm_disable_intercept_for_msr() is 
used in svm.c and vmcb.c.

-- 
Xenia


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
  2023-02-28 15:17         ` Xenia Ragiadakou
@ 2023-02-28 16:14           ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2023-02-28 16:14 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 28.02.2023 16:17, Xenia Ragiadakou wrote:
> On 2/28/23 17:10, Jan Beulich wrote:
>> On 28.02.2023 16:05, Xenia Ragiadakou wrote:
>>> On 2/28/23 16:20, Jan Beulich wrote:
>>>> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>>>>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>>>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>>>> @@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
>>>>>    
>>>>>    void setup_vmcb_dump(void);
>>>>>    
>>>>> -#define MSR_INTERCEPT_NONE    0
>>>>> -#define MSR_INTERCEPT_READ    1
>>>>> -#define MSR_INTERCEPT_WRITE   2
>>>>> -#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
>>>>> -void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
>>>>> -#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
>>>>> -#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
>>>>> +void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>>>>> +void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
>>>>> +#define svm_disable_intercept_for_msr(v, msr) \
>>>>> +    svm_clear_msr_intercept((v), (msr), MSR_RW)
>>>>> +#define svm_enable_intercept_for_msr(v, msr) \
>>>>> +    svm_set_intercept_msr((v), (msr), MSR_RW)
>>>>
>>>> Please avoid excess parentheses. Also could you clarify why you retain
>>>> these shorthands when you don't use them in the conversion that you're
>>>> doing (e.g. ())? Are you intending them to go
>>>> away down the road?
>>>
>>> Ok.
>>> I did not understand the question. Which shorthands?
>>
>> svm_disable_intercept_for_msr() and svm_enable_intercept_for_msr().
> 
> Are you suggesting to replace svm_{en,dis}able_intercept_for_msr() with 
> svm_{ser,clear}_msr_intercept()?  svm_disable_intercept_for_msr() is 
> used in svm.c and vmcb.c.

I'm suggesting one of two possible routes leading to consistent use:
1) drop the shorthands
2) retain the shorthands and don't ever open-code them
Depending on which route we want to go, either your code adjustments in
this regard are fine (and only a remark would want adding that they're
retained until remaining uses can be cleaned up), or you want to use
them in your changes wherever possible.

Jan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
  2023-02-28 14:58   ` Jan Beulich
@ 2023-02-28 20:14     ` Xenia Ragiadakou
  2023-03-01  9:14       ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Xenia Ragiadakou @ 2023-02-28 20:14 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, xen-devel


On 2/28/23 16:58, Jan Beulich wrote:
> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>> Add hvm_funcs hooks for {set,clear}_msr_intercept() for controlling the msr
>> intercept in common vpmu code.
> 
> What is this going to buy us? All calls ...
> 
>> --- a/xen/arch/x86/cpu/vpmu_amd.c
>> +++ b/xen/arch/x86/cpu/vpmu_amd.c
>> @@ -165,9 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>>   
>>       for ( i = 0; i < num_counters; i++ )
>>       {
>> -        svm_clear_msr_intercept(v, counters[i], MSR_RW);
>> -        svm_set_msr_intercept(v, ctrls[i], MSR_W);
>> -        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
>> +        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
>> +        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
>> +        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
>>       }
>>   
>>       msr_bitmap_on(vpmu);
>> @@ -180,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>>   
>>       for ( i = 0; i < num_counters; i++ )
>>       {
>> -        svm_set_msr_intercept(v, counters[i], MSR_RW);
>> -        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
>> +        hvm_set_msr_intercept(v, counters[i], MSR_RW);
>> +        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
>>       }
>>   
>>       msr_bitmap_off(vpmu);
> 
> ... here will go to the SVM functions anyway, while ...
> 
>> --- a/xen/arch/x86/cpu/vpmu_intel.c
>> +++ b/xen/arch/x86/cpu/vpmu_intel.c
>> @@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
>>   
>>       /* Allow Read/Write PMU Counters MSR Directly. */
>>       for ( i = 0; i < fixed_pmc_cnt; i++ )
>> -        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>> +        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>>   
>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>       {
>> -        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>> +        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>>   
>>           if ( full_width_write )
>> -            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>> +            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>>       }
>>   
>>       /* Allow Read PMU Non-global Controls Directly. */
>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>> -        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>> +        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>>   
>> -    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>> -    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>> +    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>> +    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>>   }
>>   
>>   static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
>> @@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
>>       unsigned int i;
>>   
>>       for ( i = 0; i < fixed_pmc_cnt; i++ )
>> -        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>> +        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>>   
>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>       {
>> -        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>> +        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>>   
>>           if ( full_width_write )
>> -            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>> +            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>>       }
>>   
>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>> -        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>> +        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>>   
>> -    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>> -    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>> +    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>> +    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>>   }
>>   
>>   static inline void __core2_vpmu_save(struct vcpu *v)
> 
> ... all calls here will go to VMX'es. For making either vpmu_<vendor>.c
> build without that vendor's virtualization enabled, isn't it all it
> takes to have ...
> 
>> @@ -916,6 +932,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>>       ASSERT_UNREACHABLE();
>>   }
>>   
>> +static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
>> +                                         int flags)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +}
>> +
>> +static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
>> +                                           int flags)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +}
> 
> ... respective SVM and VMX stubs in place instead?

IMO it is more readable and they looked like very good candidates for 
being abstracted because they are doing the same thing under both 
technologies.
Are you suggesting that their usage in common code should be discouraged 
and should not be exported via the hvm_funcs interface? Or just that the 
amount of changes cannot be justified?
IIUC Andrew also suggested to use hvm_funcs for msr intercept handling 
but I'm not sure whether he had this or something else in mind.
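
For context, the wrappers in question just follow the usual hvm_funcs
dispatch pattern, roughly along these lines (a sketch only; the exact
form is the one in the patch):

static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
                                         int flags)
{
    alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags);
}

static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
                                           int flags)
{
    alternative_vcall(hvm_funcs.clear_msr_intercept, v, msr, flags);
}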

-- 
Xenia


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
  2023-02-28 20:14     ` Xenia Ragiadakou
@ 2023-03-01  9:14       ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2023-03-01  9:14 UTC (permalink / raw)
  To: Xenia Ragiadakou, Andrew Cooper
  Cc: Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, xen-devel

On 28.02.2023 21:14, Xenia Ragiadakou wrote:
> 
> On 2/28/23 16:58, Jan Beulich wrote:
>> On 27.02.2023 08:56, Xenia Ragiadakou wrote:
>>> Add hvm_funcs hooks for {set,clear}_msr_intercept() for controlling the msr
>>> intercept in common vpmu code.
>>
>> What is this going to buy us? All calls ...
>>
>>> --- a/xen/arch/x86/cpu/vpmu_amd.c
>>> +++ b/xen/arch/x86/cpu/vpmu_amd.c
>>> @@ -165,9 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>>>   
>>>       for ( i = 0; i < num_counters; i++ )
>>>       {
>>> -        svm_clear_msr_intercept(v, counters[i], MSR_RW);
>>> -        svm_set_msr_intercept(v, ctrls[i], MSR_W);
>>> -        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
>>> +        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
>>> +        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
>>> +        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
>>>       }
>>>   
>>>       msr_bitmap_on(vpmu);
>>> @@ -180,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>>>   
>>>       for ( i = 0; i < num_counters; i++ )
>>>       {
>>> -        svm_set_msr_intercept(v, counters[i], MSR_RW);
>>> -        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
>>> +        hvm_set_msr_intercept(v, counters[i], MSR_RW);
>>> +        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
>>>       }
>>>   
>>>       msr_bitmap_off(vpmu);
>>
>> ... here will go to the SVM functions anyway, while ...
>>
>>> --- a/xen/arch/x86/cpu/vpmu_intel.c
>>> +++ b/xen/arch/x86/cpu/vpmu_intel.c
>>> @@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
>>>   
>>>       /* Allow Read/Write PMU Counters MSR Directly. */
>>>       for ( i = 0; i < fixed_pmc_cnt; i++ )
>>> -        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>>> +        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>>>   
>>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>>       {
>>> -        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>>> +        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>>>   
>>>           if ( full_width_write )
>>> -            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>>> +            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>>>       }
>>>   
>>>       /* Allow Read PMU Non-global Controls Directly. */
>>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>> -        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>>> +        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>>>   
>>> -    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>>> -    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>>> +    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>>> +    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>>>   }
>>>   
>>>   static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
>>> @@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
>>>       unsigned int i;
>>>   
>>>       for ( i = 0; i < fixed_pmc_cnt; i++ )
>>> -        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>>> +        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
>>>   
>>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>>       {
>>> -        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>>> +        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
>>>   
>>>           if ( full_width_write )
>>> -            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>>> +            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
>>>       }
>>>   
>>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>> -        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>>> +        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
>>>   
>>> -    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>>> -    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>>> +    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
>>> +    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
>>>   }
>>>   
>>>   static inline void __core2_vpmu_save(struct vcpu *v)
>>
>> ... all calls here will go to VMX'es. For making either vpmu_<vendor>.c
>> build without that vendor's virtualization enabled, isn't it all it
>> takes to have ...
>>
>>> @@ -916,6 +932,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>>>       ASSERT_UNREACHABLE();
>>>   }
>>>   
>>> +static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
>>> +                                         int flags)
>>> +{
>>> +    ASSERT_UNREACHABLE();
>>> +}
>>> +
>>> +static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
>>> +                                           int flags)
>>> +{
>>> +    ASSERT_UNREACHABLE();
>>> +}
>>
>> ... respective SVM and VMX stubs in place instead?
> 
> IMO it is more readable and they looked like very good candidates for 
> being abstracted because they are doing the same thing under both 
> technologies.
> Are you suggesting that their usage in common code should be discouraged 
> and should not be exported via the hvm_funcs interface? Or just that the 
> amount of changes cannot be justified?

The amount of changes is okay if the route taken is deemed useful going
forward. For now I view vPMU as too special to provide sufficient
justification for yet another pair of hook functions. All the more so
since - as indicated before - every single call site will only ever
call one of the two possible callees.

> IIUC Andrew also suggested to use hvm_funcs for msr intercept handling 
> but I'm not sure whether he had this or something else in mind.

Andrew, any chance you could outline your thinking / further plans here?
In particular, do you have other future use sites in mind that would
live outside of vendor-specific code?

Jan


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2023-03-01  9:15 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-27  7:56 [PATCH 0/4] hvm: add hvm_funcs hooks for msr intercept handling Xenia Ragiadakou
2023-02-27  7:56 ` [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise Xenia Ragiadakou
2023-02-28 13:54   ` Jan Beulich
2023-02-27  7:56 ` [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept() Xenia Ragiadakou
2023-02-28 14:20   ` Jan Beulich
2023-02-28 15:05     ` Xenia Ragiadakou
2023-02-28 15:10       ` Jan Beulich
2023-02-28 15:17         ` Xenia Ragiadakou
2023-02-28 16:14           ` Jan Beulich
2023-02-27  7:56 ` [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags Xenia Ragiadakou
2023-02-28 14:31   ` Jan Beulich
2023-02-28 14:34     ` Jan Beulich
2023-02-28 15:07       ` Xenia Ragiadakou
2023-02-27  7:56 ` [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept() Xenia Ragiadakou
2023-02-28 14:58   ` Jan Beulich
2023-02-28 20:14     ` Xenia Ragiadakou
2023-03-01  9:14       ` Jan Beulich
