* [PATCH v3 0/8] x86: switch default MSR behavior
@ 2020-09-01 10:54 Roger Pau Monne
  2020-09-01 10:54 ` [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
                   ` (7 more replies)
  0 siblings, 8 replies; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jun Nakajima, Kevin Tian, Jan Beulich,
	Andrew Cooper, Wei Liu

Hello,

This series changes the default MSR handling behavior, which is
currently to silently drop writes to readable MSRs and to allow
reading any MSR not explicitly handled.

After this series, accesses to MSRs not explicitly handled will
trigger a #GP fault. I've tested this series with osstest and it
doesn't introduce any regressions, at least on the boxes selected for
testing:

http://logs.test-lab.xenproject.org/osstest/logs/153391/
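The behavioral change can be sketched as a toy dispatch. Names and return
codes below are illustrative stand-ins for Xen's X86EMUL_* values, not the
actual implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative outcomes, standing in for X86EMUL_OKAY vs an injected #GP. */
enum access_result { OKAY, GP_FAULT };

/* Old catch-all: reads are forwarded to hardware, writes silently dropped. */
static enum access_result default_access_old(bool is_write)
{
    (void)is_write;
    return OKAY;
}

/* New catch-all: any access to an MSR without an explicit case raises #GP. */
static enum access_result default_access_new(bool is_write)
{
    (void)is_write;
    return GP_FAULT;
}
```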

Thanks, Roger.

Andrew Cooper (2):
  x86/hvm: Disallow access to unknown MSRs
  x86/msr: Drop compatibility #GP handling in guest_{rd,wr}msr()

Roger Pau Monne (6):
  x86/vmx: handle writes to MISC_ENABLE MSR
  x86/svm: silently drop writes to SYSCFG and related MSRs
  x86/msr: explicitly handle AMD DE_CFG
  x86/svm: handle BU_CFG and BU_CFG2 with cases
  x86/pv: allow reading FEATURE_CONTROL MSR
  x86/pv: disallow access to unknown MSRs

 xen/arch/x86/hvm/svm/svm.c     | 59 +++++++++++++++++++---------
 xen/arch/x86/hvm/vmx/vmx.c     | 31 ++++++---------
 xen/arch/x86/msr.c             | 72 +++++++++++++---------------------
 xen/arch/x86/pv/emul-priv-op.c | 18 +++++----
 4 files changed, 89 insertions(+), 91 deletions(-)

-- 
2.28.0



^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-04  8:34   ` Jan Beulich
  2020-09-07  3:25   ` Tian, Kevin
  2020-09-01 10:54 ` [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jun Nakajima, Kevin Tian, Jan Beulich,
	Andrew Cooper, Wei Liu

Such handling consists of checking that no bits have been changed from
the read value: if that's the case, silently drop the write; otherwise
inject a fault.

At least Windows guests will expect to write to the MISC_ENABLE MSR
with the same value that's been read from it.
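The check can be illustrated in isolation; misc_enable_read() here is a
hypothetical stand-in for vmx_msr_read_intercept(), and the example value
is arbitrary:

```c
#include <assert.h>
#include <stdint.h>

enum wr_result { WR_DROPPED, WR_GP_FAULT };

/* Hypothetical value the guest currently reads back from MISC_ENABLE. */
static uint64_t misc_enable_read(void)
{
    return 0x800ULL; /* arbitrary example value */
}

/*
 * A write is silently dropped only when it matches the value the guest
 * can read back; attempting to change any bit injects #GP.
 */
static enum wr_result misc_enable_write(uint64_t val)
{
    return val == misc_enable_read() ? WR_DROPPED : WR_GP_FAULT;
}
```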

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a0d58ffbe2..4717e50d4a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3163,7 +3163,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 
     switch ( msr )
     {
-        uint64_t rsvd;
+        uint64_t rsvd, tmp;
 
     case MSR_IA32_SYSENTER_CS:
         __vmwrite(GUEST_SYSENTER_CS, msr_content);
@@ -3301,6 +3301,13 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         /* None of these MSRs are writeable. */
         goto gp_fault;
 
+    case MSR_IA32_MISC_ENABLE:
+        /* Silently drop writes that don't change the reported value. */
+        if ( vmx_msr_read_intercept(msr, &tmp) != X86EMUL_OKAY ||
+             tmp != msr_content )
+            goto gp_fault;
+        break;
+
     case MSR_P6_PERFCTR(0)...MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0)...MSR_P6_EVNTSEL(7):
     case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
-- 
2.28.0




* [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
  2020-09-01 10:54 ` [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-04  8:36   ` Jan Beulich
  2020-09-01 10:54 ` [PATCH v3 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

The SYSCFG, TOP_MEM1 and TOP_MEM2 MSRs are currently exposed to
guests, with writes silently discarded. Make this explicit in the SVM
code, returning default constant values when any of these MSRs is
read, while continuing to silently drop writes.
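The resulting guest-visible behaviour boils down to the sketch below; the
function names are illustrative, not the handlers in the patch:

```c
#include <assert.h>
#include <stdint.h>

enum access_result { OKAY };

/* Reads of SYSCFG/TOP_MEM1/TOP_MEM2 hand back a constant 0. */
static enum access_result tom_syscfg_read(uint64_t *val)
{
    *val = 0;
    return OKAY;
}

/* Writes are accepted but discarded: all bits behave as read-only. */
static enum access_result tom_syscfg_write(uint64_t val)
{
    (void)val;
    return OKAY;
}
```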

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Return 0 from SYSCFG.
 - Merge switch cases.

Changes since v1:
 - Return MtrrFixDramEn in MSR_K8_SYSCFG.
---
 xen/arch/x86/hvm/svm/svm.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index ca3bbfcbb3..af584ff5d1 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1917,6 +1917,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             goto gpf;
         break;
 
+    case MSR_K8_SYSCFG:
+    case MSR_K8_TOP_MEM1:
+    case MSR_K8_TOP_MEM2:
     case MSR_K8_VM_CR:
         *msr_content = 0;
         break;
@@ -2094,6 +2097,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             goto gpf;
         break;
 
+    case MSR_K8_TOP_MEM1:
+    case MSR_K8_TOP_MEM2:
+    case MSR_K8_SYSCFG:
     case MSR_K8_VM_CR:
         /* ignore write. handle all bits as read-only. */
         break;
-- 
2.28.0




* [PATCH v3 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
  2020-09-01 10:54 ` [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
  2020-09-01 10:54 ` [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-02 20:49   ` Andrew Cooper
  2020-09-01 10:54 ` [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases Roger Pau Monne
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Report LFENCE_SERIALISE unconditionally for DE_CFG on AMD hardware and
silently drop writes.
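In effect the handling looks like the sketch below. The LFENCE-serialising
bit is DE_CFG[1] as defined in the patch; the helper names and the plain
int vendor flag are illustrative stand-ins for the cpu policy checks:

```c
#include <assert.h>
#include <stdint.h>

#define DE_CFG_LFENCE_SERIALISE (1ULL << 1) /* DE_CFG bit 1 */

enum access_result { OKAY, GP_FAULT };

/* Reads always report LFENCE as serialising on AMD/Hygon hardware. */
static enum access_result de_cfg_read(int is_amd_or_hygon, uint64_t *val)
{
    if ( !is_amd_or_hygon )
        return GP_FAULT;
    *val = DE_CFG_LFENCE_SERIALISE;
    return OKAY;
}

/* Writes are silently dropped (OpenBSD 6.7 panics if they raise #GP). */
static enum access_result de_cfg_write(int is_amd_or_hygon)
{
    return is_amd_or_hygon ? OKAY : GP_FAULT;
}
```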

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Drop the boot_cpu checks and don't attempt to read the MSR; just
   return LFENCE_SERIALISE unconditionally.
 - Add a comment about OpenBSD panicking if writing to the MSR
   triggers a #GP.

Changes since v1:
 - New in this version.
---
 xen/arch/x86/msr.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index a478b91f23..e84107ac7b 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -292,6 +292,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = msrs->tsc_aux;
         break;
 
+    case MSR_AMD64_DE_CFG:
+        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+            goto gp_fault;
+        *val = AMD64_DE_CFG_LFENCE_SERIALISE;
+        break;
+
     case MSR_AMD64_DR0_ADDRESS_MASK:
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !cp->extd.dbext )
@@ -517,6 +523,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsr_tsc_aux(val);
         break;
 
+    case MSR_AMD64_DE_CFG:
+        /*
+         * OpenBSD 6.7 will panic if writing to DE_CFG triggers a #GP:
+         * https://www.illumos.org/issues/12998
+         */
+        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+            goto gp_fault;
+        break;
+
     case MSR_AMD64_DR0_ADDRESS_MASK:
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !cp->extd.dbext || val != (uint32_t)val )
-- 
2.28.0




* [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (2 preceding siblings ...)
  2020-09-01 10:54 ` [PATCH v3 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-02 21:02   ` Andrew Cooper
  2020-09-01 10:54 ` [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Move the special handling of reads to its own switch case, and also
add support for BU_CFG2. On the write side, ignore writes if the MSR
is readable; otherwise return a #GP.

This is in preparation for changing the default MSR read/write
behavior, which will instead return #GP for any case not explicitly
handled.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Move the handling of reads to its own case.
 - Drop writes if the MSR is readable, else return a #GP.

Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/svm/svm.c | 43 ++++++++++++++++++++++++++------------
 1 file changed, 30 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index af584ff5d1..0e43154c7e 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1864,6 +1864,30 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = 1ULL << 61; /* MC4_MISC.Locked */
         break;
 
+    case MSR_F10_BU_CFG:
+        if ( !rdmsr_safe(msr, *msr_content) )
+            break;
+
+        if ( boot_cpu_data.x86 == 0xf )
+        {
+            /*
+             * Win2k8 x64 reads this MSR on revF chips, where it wasn't
+             * publically available; it uses a magic constant in %rdi as a
+             * password, which we don't have in rdmsr_safe().  Since we'll
+             * ignore the later writes, just use a plausible value here (the
+             * reset value from rev10h chips) if the real CPU didn't provide
+             * one.
+             */
+            *msr_content = 0x0000000010200020ull;
+            break;
+        }
+        goto gpf;
+
+    case MSR_F10_BU_CFG2:
+        if ( rdmsr_safe(msr, *msr_content) )
+            goto gpf;
+        break;
+
     case MSR_IA32_EBC_FREQUENCY_ID:
         /*
          * This Intel-only register may be accessed if this HVM guest
@@ -1942,19 +1966,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     default:
         if ( rdmsr_safe(msr, *msr_content) == 0 )
             break;
-
-        if ( boot_cpu_data.x86 == 0xf && msr == MSR_F10_BU_CFG )
-        {
-            /* Win2k8 x64 reads this MSR on revF chips, where it
-             * wasn't publically available; it uses a magic constant
-             * in %rdi as a password, which we don't have in
-             * rdmsr_safe().  Since we'll ignore the later writes,
-             * just use a plausible value here (the reset value from
-             * rev10h chips) if the real CPU didn't provide one. */
-            *msr_content = 0x0000000010200020ull;
-            break;
-        }
-
         goto gpf;
     }
 
@@ -2110,6 +2121,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         nsvm->ns_msr_hsavepa = msr_content;
         break;
 
+    case MSR_F10_BU_CFG:
+    case MSR_F10_BU_CFG2:
+        if ( rdmsr_safe(msr, msr_content) )
+            goto gpf;
+        break;
+
     case MSR_AMD64_TSC_RATIO:
         if ( msr_content & TSC_RATIO_RSVD_BITS )
             goto gpf;
-- 
2.28.0




* [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (3 preceding siblings ...)
  2020-09-01 10:54 ` [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-02 20:56   ` Andrew Cooper
  2020-09-01 10:54 ` [PATCH v3 6/8] x86/pv: disallow access to unknown MSRs Roger Pau Monne
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jun Nakajima, Kevin Tian, Jan Beulich,
	Andrew Cooper, Wei Liu

Linux PV guests will attempt to read the FEATURE_CONTROL MSR, so move
the handling done in VMX code into guest_rdmsr as it can be shared
between PV and HVM guests that way.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes from v1:
 - Move the VMX implementation into guest_rdmsr.
---
 xen/arch/x86/hvm/vmx/vmx.c |  8 +-------
 xen/arch/x86/msr.c         | 13 +++++++++++++
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4717e50d4a..f6657af923 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2980,13 +2980,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_DEBUGCTLMSR:
         __vmread(GUEST_IA32_DEBUGCTL, msr_content);
         break;
-    case MSR_IA32_FEATURE_CONTROL:
-        *msr_content = IA32_FEATURE_CONTROL_LOCK;
-        if ( vmce_has_lmce(curr) )
-            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
-        if ( nestedhvm_enabled(curr->domain) )
-            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
-        break;
+
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
         if ( !nvmx_msr_read_intercept(msr, msr_content) )
             goto gp_fault;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index e84107ac7b..cc2f111a90 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -25,6 +25,7 @@
 #include <xen/sched.h>
 
 #include <asm/debugreg.h>
+#include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/viridian.h>
 #include <asm/msr.h>
 #include <asm/setup.h>
@@ -197,6 +198,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         /* Not offered to guests. */
         goto gp_fault;
 
+    case MSR_IA32_FEATURE_CONTROL:
+        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
+            goto gp_fault;
+
+        *val = IA32_FEATURE_CONTROL_LOCK;
+        if ( vmce_has_lmce(v) )
+            *val |= IA32_FEATURE_CONTROL_LMCE_ON;
+        if ( nestedhvm_enabled(d) )
+            *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
+        break;
+
+
     case MSR_IA32_PLATFORM_ID:
         if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
              !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )
-- 
2.28.0




* [PATCH v3 6/8] x86/pv: disallow access to unknown MSRs
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (4 preceding siblings ...)
  2020-09-01 10:54 ` [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-01 10:54 ` [PATCH v3 7/8] x86/hvm: Disallow " Roger Pau Monne
  2020-09-01 10:54 ` [PATCH v3 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr() Roger Pau Monne
  7 siblings, 0 replies; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Change the catch-all behavior for MSRs not explicitly handled. Instead
of allowing full read access to the MSR space and silently dropping
writes, return an exception when the MSR is not explicitly handled.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes since v2:
 - Added missing 0x prefix.
---
 xen/arch/x86/pv/emul-priv-op.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index bcc1188f6a..2d9953f5b4 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -972,9 +972,10 @@ static int read_msr(unsigned int reg, uint64_t *val,
         }
         /* fall through */
     default:
+        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", reg);
+        break;
+
     normal:
-        /* Everyone can read the MSR space. */
-        /* gdprintk(XENLOG_WARNING, "Domain attempted RDMSR %08x\n", reg); */
         if ( rdmsr_safe(reg, *val) )
             break;
         return X86EMUL_OKAY;
@@ -1141,14 +1142,15 @@ static int write_msr(unsigned int reg, uint64_t val,
         }
         /* fall through */
     default:
-        if ( rdmsr_safe(reg, temp) )
-            break;
+        gdprintk(XENLOG_WARNING,
+                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
+                 reg, val);
+        break;
 
-        if ( val != temp )
     invalid:
-            gdprintk(XENLOG_WARNING,
-                     "Domain attempted WRMSR %08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
-                     reg, temp, val);
+        gdprintk(XENLOG_WARNING,
+                 "Domain attempted WRMSR 0x%08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
+                 reg, temp, val);
         return X86EMUL_OKAY;
     }
 
-- 
2.28.0




* [PATCH v3 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (5 preceding siblings ...)
  2020-09-01 10:54 ` [PATCH v3 6/8] x86/pv: disallow access to unknown MSRs Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  2020-09-04  8:53   ` Jan Beulich
  2020-09-07  3:31   ` Tian, Kevin
  2020-09-01 10:54 ` [PATCH v3 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr() Roger Pau Monne
  7 siblings, 2 replies; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

From: Andrew Cooper <andrew.cooper3@citrix.com>

Change the catch-all behavior for MSRs not explicitly handled. Instead
of allowing full read access to the MSR space and silently dropping
writes, return an exception when the MSR is not explicitly handled.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
[remove rdmsr_safe from default case in svm_msr_read_intercept]
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Fold chunk to remove explicit write handling of VMX MSRs just to
   #GP.
 - Remove catch-all rdmsr_safe in svm_msr_read_intercept.
---
 xen/arch/x86/hvm/svm/svm.c | 10 ++++------
 xen/arch/x86/hvm/vmx/vmx.c | 16 ++++------------
 2 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 0e43154c7e..66b22efdab 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1964,8 +1964,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     default:
-        if ( rdmsr_safe(msr, *msr_content) == 0 )
-            break;
+        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
         goto gpf;
     }
 
@@ -2150,10 +2149,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
 
     default:
-        /* Match up with the RDMSR side; ultimately this should go away. */
-        if ( rdmsr_safe(msr, msr_content) == 0 )
-            break;
-
+        gdprintk(XENLOG_WARNING,
+                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
+                 msr, msr_content);
         goto gpf;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f6657af923..9cc9d81c41 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3015,9 +3015,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             break;
         }
 
-        if ( rdmsr_safe(msr, *msr_content) == 0 )
-            break;
-
+        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
         goto gp_fault;
     }
 
@@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
         break;
 
-    case MSR_IA32_FEATURE_CONTROL:
-    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
-        /* None of these MSRs are writeable. */
-        goto gp_fault;
-
     case MSR_IA32_MISC_ENABLE:
         /* Silently drop writes that don't change the reported value. */
         if ( vmx_msr_read_intercept(msr, &tmp) != X86EMUL_OKAY ||
@@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
              is_last_branch_msr(msr) )
             break;
 
-        /* Match up with the RDMSR side; ultimately this should go away. */
-        if ( rdmsr_safe(msr, msr_content) == 0 )
-            break;
-
+        gdprintk(XENLOG_WARNING,
+                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
+                 msr, msr_content);
         goto gp_fault;
     }
 
-- 
2.28.0




* [PATCH v3 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr()
  2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (6 preceding siblings ...)
  2020-09-01 10:54 ` [PATCH v3 7/8] x86/hvm: Disallow " Roger Pau Monne
@ 2020-09-01 10:54 ` Roger Pau Monne
  7 siblings, 0 replies; 28+ messages in thread
From: Roger Pau Monne @ 2020-09-01 10:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu

From: Andrew Cooper <andrew.cooper3@citrix.com>

Now that the main PV/HVM MSR handlers raise #GP for all unknown MSRs, there is
no need to special case these MSRs any more.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/msr.c | 46 ----------------------------------------------
 1 file changed, 46 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index cc2f111a90..ab11e3b73c 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -175,29 +175,6 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
     switch ( msr )
     {
-    case MSR_AMD_PATCHLOADER:
-    case MSR_IA32_UCODE_WRITE:
-    case MSR_PRED_CMD:
-    case MSR_FLUSH_CMD:
-        /* Write-only */
-    case MSR_TEST_CTRL:
-    case MSR_CORE_CAPABILITIES:
-    case MSR_TSX_FORCE_ABORT:
-    case MSR_TSX_CTRL:
-    case MSR_MCU_OPT_CTRL:
-    case MSR_RTIT_OUTPUT_BASE ... MSR_RTIT_ADDR_B(7):
-    case MSR_U_CET:
-    case MSR_S_CET:
-    case MSR_PL0_SSP ... MSR_INTERRUPT_SSP_TABLE:
-    case MSR_AMD64_LWP_CFG:
-    case MSR_AMD64_LWP_CBADDR:
-    case MSR_PPIN_CTL:
-    case MSR_PPIN:
-    case MSR_AMD_PPIN_CTL:
-    case MSR_AMD_PPIN:
-        /* Not offered to guests. */
-        goto gp_fault;
-
     case MSR_IA32_FEATURE_CONTROL:
         if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
             goto gp_fault;
@@ -365,29 +342,6 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     {
         uint64_t rsvd;
 
-    case MSR_IA32_PLATFORM_ID:
-    case MSR_CORE_CAPABILITIES:
-    case MSR_INTEL_CORE_THREAD_COUNT:
-    case MSR_INTEL_PLATFORM_INFO:
-    case MSR_ARCH_CAPABILITIES:
-        /* Read-only */
-    case MSR_TEST_CTRL:
-    case MSR_TSX_FORCE_ABORT:
-    case MSR_TSX_CTRL:
-    case MSR_MCU_OPT_CTRL:
-    case MSR_RTIT_OUTPUT_BASE ... MSR_RTIT_ADDR_B(7):
-    case MSR_U_CET:
-    case MSR_S_CET:
-    case MSR_PL0_SSP ... MSR_INTERRUPT_SSP_TABLE:
-    case MSR_AMD64_LWP_CFG:
-    case MSR_AMD64_LWP_CBADDR:
-    case MSR_PPIN_CTL:
-    case MSR_PPIN:
-    case MSR_AMD_PPIN_CTL:
-    case MSR_AMD_PPIN:
-        /* Not offered to guests. */
-        goto gp_fault;
-
     case MSR_AMD_PATCHLEVEL:
         BUILD_BUG_ON(MSR_IA32_UCODE_REV != MSR_AMD_PATCHLEVEL);
         /*
-- 
2.28.0




* Re: [PATCH v3 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-09-01 10:54 ` [PATCH v3 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
@ 2020-09-02 20:49   ` Andrew Cooper
  0 siblings, 0 replies; 28+ messages in thread
From: Andrew Cooper @ 2020-09-02 20:49 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Jan Beulich, Wei Liu

On 01/09/2020 11:54, Roger Pau Monne wrote:
> @@ -517,6 +523,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>              wrmsr_tsc_aux(val);
>          break;
>  
> +    case MSR_AMD64_DE_CFG:
> +        /*
> +         * OpenBSD 6.7 will panic if writing to DE_CFG triggers a #GP:
> +         * https://www.illumos.org/issues/12998

"Drop writes", or some suitable equivalent, so it is clear what action
Xen is trying to take in response to the bug.

Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> +         */
> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +            goto gp_fault;
> +        break;
> +
>      case MSR_AMD64_DR0_ADDRESS_MASK:
>      case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
>          if ( !cp->extd.dbext || val != (uint32_t)val )




* Re: [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-09-01 10:54 ` [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
@ 2020-09-02 20:56   ` Andrew Cooper
  2020-09-03 13:33     ` Roger Pau Monné
  0 siblings, 1 reply; 28+ messages in thread
From: Andrew Cooper @ 2020-09-02 20:56 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Jun Nakajima, Kevin Tian, Jan Beulich, Wei Liu

On 01/09/2020 11:54, Roger Pau Monne wrote:
> Linux PV guests will attempt to read the FEATURE_CONTROL MSR, so move
> the handling done in VMX code into guest_rdmsr as it can be shared
> between PV and HVM guests that way.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes from v1:
>  - Move the VMX implementation into guest_rdmsr.
> ---
>  xen/arch/x86/hvm/vmx/vmx.c |  8 +-------
>  xen/arch/x86/msr.c         | 13 +++++++++++++
>  2 files changed, 14 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 4717e50d4a..f6657af923 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2980,13 +2980,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>      case MSR_IA32_DEBUGCTLMSR:
>          __vmread(GUEST_IA32_DEBUGCTL, msr_content);
>          break;
> -    case MSR_IA32_FEATURE_CONTROL:
> -        *msr_content = IA32_FEATURE_CONTROL_LOCK;
> -        if ( vmce_has_lmce(curr) )
> -            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
> -        if ( nestedhvm_enabled(curr->domain) )
> -            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> -        break;
> +
>      case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
>          if ( !nvmx_msr_read_intercept(msr, msr_content) )
>              goto gp_fault;
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index e84107ac7b..cc2f111a90 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -25,6 +25,7 @@
>  #include <xen/sched.h>
>  
>  #include <asm/debugreg.h>
> +#include <asm/hvm/nestedhvm.h>
>  #include <asm/hvm/viridian.h>
>  #include <asm/msr.h>
>  #include <asm/setup.h>
> @@ -197,6 +198,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          /* Not offered to guests. */
>          goto gp_fault;
>  
> +    case MSR_IA32_FEATURE_CONTROL:
> +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
> +            goto gp_fault;

The MSR is available if:

"If any one enumeration condition for defined bit field position
greater than bit 0 holds."

which for us means cp->basic.vmx || cp->feat.lmce at the moment, with
perhaps some smx/sgx in the future.

In particular, this MSR is available on Centaur and Shanghai, which
implement VT-x.

~Andrew
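The availability rule described above could be expressed as follows; the
struct and its fields are hypothetical stand-ins for the relevant cpu
policy bits:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical subset of the CPUID policy relevant here. */
struct policy { bool vmx; bool lmce; };

/*
 * FEATURE_CONTROL exists if any defined bit above bit 0 can be
 * enumerated: for current Xen that means VMX or LMCE (SMX/SGX could
 * extend the condition later).
 */
static bool feature_control_available(const struct policy *cp)
{
    return cp->vmx || cp->lmce;
}
```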

> +
> +        *val = IA32_FEATURE_CONTROL_LOCK;
> +        if ( vmce_has_lmce(v) )
> +            *val |= IA32_FEATURE_CONTROL_LMCE_ON;
> +        if ( nestedhvm_enabled(d) )
> +            *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> +        break;
> +
> +
>      case MSR_IA32_PLATFORM_ID:
>          if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
>               !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )




* Re: [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases
  2020-09-01 10:54 ` [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases Roger Pau Monne
@ 2020-09-02 21:02   ` Andrew Cooper
  2020-09-03  8:15     ` Roger Pau Monné
  2020-09-03  8:29     ` Jan Beulich
  0 siblings, 2 replies; 28+ messages in thread
From: Andrew Cooper @ 2020-09-02 21:02 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Jan Beulich, Wei Liu

On 01/09/2020 11:54, Roger Pau Monne wrote:
> @@ -1942,19 +1966,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>      default:
>          if ( rdmsr_safe(msr, *msr_content) == 0 )
>              break;
> -
> -        if ( boot_cpu_data.x86 == 0xf && msr == MSR_F10_BU_CFG )
> -        {
> -            /* Win2k8 x64 reads this MSR on revF chips, where it
> -             * wasn't publically available; it uses a magic constant
> -             * in %rdi as a password, which we don't have in
> -             * rdmsr_safe().  Since we'll ignore the later writes,
> -             * just use a plausible value here (the reset value from
> -             * rev10h chips) if the real CPU didn't provide one. */
> -            *msr_content = 0x0000000010200020ull;
> -            break;
> -        }
> -
>          goto gpf;
>      }
>  
> @@ -2110,6 +2121,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          nsvm->ns_msr_hsavepa = msr_content;
>          break;
>  
> +    case MSR_F10_BU_CFG:
> +    case MSR_F10_BU_CFG2:
> +        if ( rdmsr_safe(msr, msr_content) )
> +            goto gpf;

The comment you've moved depends on this not having this behaviour, as
you'll now hand #GP back to Win2k8 on its write.

Honestly, given how obsolete both Win2k8 and K10s are, I'm seriously
tempted to suggest dropping this workaround entirely.

~Andrew

> +        break;
> +
>      case MSR_AMD64_TSC_RATIO:
>          if ( msr_content & TSC_RATIO_RSVD_BITS )
>              goto gpf;




* Re: [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases
  2020-09-02 21:02   ` Andrew Cooper
@ 2020-09-03  8:15     ` Roger Pau Monné
  2020-09-04  8:39       ` Jan Beulich
  2020-09-03  8:29     ` Jan Beulich
  1 sibling, 1 reply; 28+ messages in thread
From: Roger Pau Monné @ 2020-09-03  8:15 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Jan Beulich, Wei Liu

On Wed, Sep 02, 2020 at 10:02:33PM +0100, Andrew Cooper wrote:
> On 01/09/2020 11:54, Roger Pau Monne wrote:
> > @@ -1942,19 +1966,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >      default:
> >          if ( rdmsr_safe(msr, *msr_content) == 0 )
> >              break;
> > -
> > -        if ( boot_cpu_data.x86 == 0xf && msr == MSR_F10_BU_CFG )
> > -        {
> > -            /* Win2k8 x64 reads this MSR on revF chips, where it
> > -             * wasn't publically available; it uses a magic constant
> > -             * in %rdi as a password, which we don't have in
> > -             * rdmsr_safe().  Since we'll ignore the later writes,
> > -             * just use a plausible value here (the reset value from
> > -             * rev10h chips) if the real CPU didn't provide one. */
> > -            *msr_content = 0x0000000010200020ull;
> > -            break;
> > -        }
> > -
> >          goto gpf;
> >      }
> >  
> > @@ -2110,6 +2121,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >          nsvm->ns_msr_hsavepa = msr_content;
> >          break;
> >  
> > +    case MSR_F10_BU_CFG:
> > +    case MSR_F10_BU_CFG2:
> > +        if ( rdmsr_safe(msr, msr_content) )
> > +            goto gpf;
> 
> The comment you've moved depends on this not having this behaviour, as
> you'll now hand #GP back to Win2k8 on its write.

Oh, unless I'm very confused, the comment was already wrong, since
svm_msr_write_intercept prior to this patch would return a #GP when
trying to write to BU_CFG if the underlying MSR is not readable, so
this just keeps the previous behavior.

Looking at the original commit (338db98dd8d) it seems the intention
was to only handle reads and let writes return a #GP?

Maybe Win2k8 would only read the MSR?

> Honestly, given that how obsolete both Win2k8 and K10's are, I'm
> seriously tempted to suggest dropping this workaround entirely.

I'm fine with just dropping the workaround, but it seems quite trivial
to keep it as is (reads return a known value, writes #GP). I can
adjust the comment to be clearer.
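
For reference, the "reads return a known value, writes #GP" semantics
under discussion can be modeled as a small self-contained sketch. The
stub below stands in for rdmsr_safe(); names and the fallback constant
follow the quoted hunks, but this is a toy, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MSR_F10_BU_CFG      0xc0011023u
#define BU_CFG_RESET_REV10H 0x0000000010200020ull

/* Stands in for rdmsr_safe(): 0 on success, nonzero if the read #GPs. */
static int rdmsr_safe_stub(unsigned int msr, uint64_t *val, bool hw_has_msr)
{
    (void)msr;
    if ( !hw_has_msr )
        return -1;
    *val = 0; /* pretend this is the hardware value */
    return 0;
}

/* Read intercept: fall back to the rev10h reset value on fault. */
static int bu_cfg_read(uint64_t *msr_content, bool hw_has_msr)
{
    if ( rdmsr_safe_stub(MSR_F10_BU_CFG, msr_content, hw_has_msr) )
        *msr_content = BU_CFG_RESET_REV10H;
    return 0; /* X86EMUL_OKAY */
}

/* Write intercept: probe readability, then silently drop the value. */
static int bu_cfg_write(uint64_t msr_content, bool hw_has_msr)
{
    uint64_t tmp;

    (void)msr_content; /* the write itself is discarded */
    return rdmsr_safe_stub(MSR_F10_BU_CFG, &tmp, hw_has_msr) ? -1 : 0;
}
```

On hardware lacking the MSR, reads still succeed with the plausible
constant while writes fault, matching the behavior described above.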

Thanks, Roger.



* Re: [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases
  2020-09-02 21:02   ` Andrew Cooper
  2020-09-03  8:15     ` Roger Pau Monné
@ 2020-09-03  8:29     ` Jan Beulich
  1 sibling, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2020-09-03  8:29 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monne, xen-devel, Wei Liu

On 02.09.2020 23:02, Andrew Cooper wrote:
> On 01/09/2020 11:54, Roger Pau Monne wrote:
>> @@ -1942,19 +1966,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>      default:
>>          if ( rdmsr_safe(msr, *msr_content) == 0 )
>>              break;
>> -
>> -        if ( boot_cpu_data.x86 == 0xf && msr == MSR_F10_BU_CFG )
>> -        {
>> -            /* Win2k8 x64 reads this MSR on revF chips, where it
>> -             * wasn't publically available; it uses a magic constant
>> -             * in %rdi as a password, which we don't have in
>> -             * rdmsr_safe().  Since we'll ignore the later writes,
>> -             * just use a plausible value here (the reset value from
>> -             * rev10h chips) if the real CPU didn't provide one. */
>> -            *msr_content = 0x0000000010200020ull;
>> -            break;
>> -        }
>> -
>>          goto gpf;
>>      }
>>  
>> @@ -2110,6 +2121,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>          nsvm->ns_msr_hsavepa = msr_content;
>>          break;
>>  
>> +    case MSR_F10_BU_CFG:
>> +    case MSR_F10_BU_CFG2:
>> +        if ( rdmsr_safe(msr, msr_content) )
>> +            goto gpf;
> 
> The comment you've moved depends on this not having this behaviour, as
> you'll now hand #GP back to Win2k8 on its write.
> 
> Honestly, given that how obsolete both Win2k8 and K10's are, I'm
> seriously tempted to suggest dropping this workaround entirely.

Let's not (yet). I'm told we (as a company) still support such guests.

Jan



* Re: [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-09-02 20:56   ` Andrew Cooper
@ 2020-09-03 13:33     ` Roger Pau Monné
  2020-09-03 14:06       ` Andrew Cooper
  0 siblings, 1 reply; 28+ messages in thread
From: Roger Pau Monné @ 2020-09-03 13:33 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Jun Nakajima, Kevin Tian, Jan Beulich, Wei Liu

On Wed, Sep 02, 2020 at 09:56:48PM +0100, Andrew Cooper wrote:
> On 01/09/2020 11:54, Roger Pau Monne wrote:
> > Linux PV guests will attempt to read the FEATURE_CONTROL MSR, so move
> > the handling done in VMX code into guest_rdmsr as it can be shared
> > between PV and HVM guests that way.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes from v1:
> >  - Move the VMX implementation into guest_rdmsr.
> > ---
> >  xen/arch/x86/hvm/vmx/vmx.c |  8 +-------
> >  xen/arch/x86/msr.c         | 13 +++++++++++++
> >  2 files changed, 14 insertions(+), 7 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index 4717e50d4a..f6657af923 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -2980,13 +2980,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >      case MSR_IA32_DEBUGCTLMSR:
> >          __vmread(GUEST_IA32_DEBUGCTL, msr_content);
> >          break;
> > -    case MSR_IA32_FEATURE_CONTROL:
> > -        *msr_content = IA32_FEATURE_CONTROL_LOCK;
> > -        if ( vmce_has_lmce(curr) )
> > -            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
> > -        if ( nestedhvm_enabled(curr->domain) )
> > -            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> > -        break;
> > +
> >      case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
> >          if ( !nvmx_msr_read_intercept(msr, msr_content) )
> >              goto gp_fault;
> > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > index e84107ac7b..cc2f111a90 100644
> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -25,6 +25,7 @@
> >  #include <xen/sched.h>
> >  
> >  #include <asm/debugreg.h>
> > +#include <asm/hvm/nestedhvm.h>
> >  #include <asm/hvm/viridian.h>
> >  #include <asm/msr.h>
> >  #include <asm/setup.h>
> > @@ -197,6 +198,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >          /* Not offered to guests. */
> >          goto gp_fault;
> >  
> > +    case MSR_IA32_FEATURE_CONTROL:
> > +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
> > +            goto gp_fault;
> 
> The MSR is available if:
> 
> "If any one enumeration
> condition for defined bit
> field position greater than
> bit 0 holds."
> 
> which for us means cp->basic.vmx || cp->feat.lmce at the moment, with
> perhaps some smx/sgx in the future.

I don't think there's an lmce CPU bit (it seems to be signaled in
IA32_MCG_CAP[27] = 1); maybe I should just use vmce_has_lmce?

if ( !cp->basic.vmx || !vmce_has_lmce(v) )
    goto gp_fault;

Is it fine however to return a #GP if we don't provide any of those
features to guests, but the underlying CPU does support them?

That seems to be slightly different from how we currently handle reads
to FEATURE_CONTROL, since it will never fault on Intel (or compliant),
even if we just return with bit 0 set alone. The new approach seems
closer to what real hardware would do.
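
As a side note, the fault condition in the snippet above presumably
wants && rather than ||: per the SDM wording Andrew quoted, the MSR
should only fault when *neither* vmx nor lmce is exposed. A
self-contained model of the read path, with stub state standing in for
Xen's vcpu/cpuid structures (names invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define IA32_FEATURE_CONTROL_LOCK                     (1ull << 0)
#define IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX (1ull << 2)
#define IA32_FEATURE_CONTROL_LMCE_ON                  (1ull << 20)

struct guest_model {
    bool vmx;        /* cp->basic.vmx */
    bool lmce;       /* vmce_has_lmce(v) */
    bool nested_hvm; /* nestedhvm_enabled(d) */
};

/* 0 on success, -1 to signal #GP. */
static int feature_control_rdmsr(const struct guest_model *g, uint64_t *val)
{
    /* The MSR exists only if some defined bit above bit 0 is enumerated. */
    if ( !g->vmx && !g->lmce )
        return -1;

    *val = IA32_FEATURE_CONTROL_LOCK;
    if ( g->lmce )
        *val |= IA32_FEATURE_CONTROL_LMCE_ON;
    if ( g->nested_hvm )
        *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
    return 0;
}
```

A guest with only vmx exposed reads back just the LOCK bit; a guest
with neither feature takes a #GP, mirroring real hardware.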

Thanks, Roger.



* Re: [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-09-03 13:33     ` Roger Pau Monné
@ 2020-09-03 14:06       ` Andrew Cooper
  2020-09-03 14:10         ` Roger Pau Monné
  0 siblings, 1 reply; 28+ messages in thread
From: Andrew Cooper @ 2020-09-03 14:06 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Jun Nakajima, Kevin Tian, Jan Beulich, Wei Liu

On 03/09/2020 14:33, Roger Pau Monné wrote:
> On Wed, Sep 02, 2020 at 09:56:48PM +0100, Andrew Cooper wrote:
>> On 01/09/2020 11:54, Roger Pau Monne wrote:
>>> Linux PV guests will attempt to read the FEATURE_CONTROL MSR, so move
>>> the handling done in VMX code into guest_rdmsr as it can be shared
>>> between PV and HVM guests that way.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> Changes from v1:
>>>  - Move the VMX implementation into guest_rdmsr.
>>> ---
>>>  xen/arch/x86/hvm/vmx/vmx.c |  8 +-------
>>>  xen/arch/x86/msr.c         | 13 +++++++++++++
>>>  2 files changed, 14 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>>> index 4717e50d4a..f6657af923 100644
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -2980,13 +2980,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>>      case MSR_IA32_DEBUGCTLMSR:
>>>          __vmread(GUEST_IA32_DEBUGCTL, msr_content);
>>>          break;
>>> -    case MSR_IA32_FEATURE_CONTROL:
>>> -        *msr_content = IA32_FEATURE_CONTROL_LOCK;
>>> -        if ( vmce_has_lmce(curr) )
>>> -            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
>>> -        if ( nestedhvm_enabled(curr->domain) )
>>> -            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
>>> -        break;
>>> +
>>>      case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
>>>          if ( !nvmx_msr_read_intercept(msr, msr_content) )
>>>              goto gp_fault;
>>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
>>> index e84107ac7b..cc2f111a90 100644
>>> --- a/xen/arch/x86/msr.c
>>> +++ b/xen/arch/x86/msr.c
>>> @@ -25,6 +25,7 @@
>>>  #include <xen/sched.h>
>>>  
>>>  #include <asm/debugreg.h>
>>> +#include <asm/hvm/nestedhvm.h>
>>>  #include <asm/hvm/viridian.h>
>>>  #include <asm/msr.h>
>>>  #include <asm/setup.h>
>>> @@ -197,6 +198,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>>          /* Not offered to guests. */
>>>          goto gp_fault;
>>>  
>>> +    case MSR_IA32_FEATURE_CONTROL:
>>> +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
>>> +            goto gp_fault;
>> The MSR is available if:
>>
>> "If any one enumeration
>> condition for defined bit
>> field position greater than
>> bit 0 holds."
>>
>> which for us means cp->basic.vmx || cp->feat.lmce at the moment, with
>> perhaps some smx/sgx in the future.
> I don't think there's a lmce cpu bit (seems to be signaled in
> IA32_MCG_CAP[27] = 1), maybe I should just use vmce_has_lmce?
>
> if ( !cp->basic.vmx || !vmce_has_lmce(v) )
>     goto gp_fault;

Ah yes, sorry.

Eventually that will be mp->mcg_cap.lmce, but use the predicate for now.

> Is it fine however to return a #GP if we don't provide any of those
> features to guests, but the underlying CPU does support them?

Yes.  That is literally how the availability of the MSR is specified by Intel.

Any code which doesn't follow those rules will find #GP even on some
recent CPUs.

> That seems to be slightly different from how we currently handle reads
> to FEATURE_CONTROL, since it will never fault on Intel (or compliant),
> even if we just return with bit 0 set alone. The new approach seems
> closer to what real hardware would do.

Don't fall into the trap of assuming that the current MSR behaviour is
perfect, and something we wish to preserve. :)

~Andrew



* Re: [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-09-03 14:06       ` Andrew Cooper
@ 2020-09-03 14:10         ` Roger Pau Monné
  0 siblings, 0 replies; 28+ messages in thread
From: Roger Pau Monné @ 2020-09-03 14:10 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Jun Nakajima, Kevin Tian, Jan Beulich, Wei Liu

On Thu, Sep 03, 2020 at 03:06:38PM +0100, Andrew Cooper wrote:
> On 03/09/2020 14:33, Roger Pau Monné wrote:
> > On Wed, Sep 02, 2020 at 09:56:48PM +0100, Andrew Cooper wrote:
> >> On 01/09/2020 11:54, Roger Pau Monne wrote:
> >>> Linux PV guests will attempt to read the FEATURE_CONTROL MSR, so move
> >>> the handling done in VMX code into guest_rdmsr as it can be shared
> >>> between PV and HVM guests that way.
> >>>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>> ---
> >>> Changes from v1:
> >>>  - Move the VMX implementation into guest_rdmsr.
> >>> ---
> >>>  xen/arch/x86/hvm/vmx/vmx.c |  8 +-------
> >>>  xen/arch/x86/msr.c         | 13 +++++++++++++
> >>>  2 files changed, 14 insertions(+), 7 deletions(-)
> >>>
> >>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >>> index 4717e50d4a..f6657af923 100644
> >>> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >>> @@ -2980,13 +2980,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >>>      case MSR_IA32_DEBUGCTLMSR:
> >>>          __vmread(GUEST_IA32_DEBUGCTL, msr_content);
> >>>          break;
> >>> -    case MSR_IA32_FEATURE_CONTROL:
> >>> -        *msr_content = IA32_FEATURE_CONTROL_LOCK;
> >>> -        if ( vmce_has_lmce(curr) )
> >>> -            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
> >>> -        if ( nestedhvm_enabled(curr->domain) )
> >>> -            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> >>> -        break;
> >>> +
> >>>      case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
> >>>          if ( !nvmx_msr_read_intercept(msr, msr_content) )
> >>>              goto gp_fault;
> >>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> >>> index e84107ac7b..cc2f111a90 100644
> >>> --- a/xen/arch/x86/msr.c
> >>> +++ b/xen/arch/x86/msr.c
> >>> @@ -25,6 +25,7 @@
> >>>  #include <xen/sched.h>
> >>>  
> >>>  #include <asm/debugreg.h>
> >>> +#include <asm/hvm/nestedhvm.h>
> >>>  #include <asm/hvm/viridian.h>
> >>>  #include <asm/msr.h>
> >>>  #include <asm/setup.h>
> >>> @@ -197,6 +198,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >>>          /* Not offered to guests. */
> >>>          goto gp_fault;
> >>>  
> >>> +    case MSR_IA32_FEATURE_CONTROL:
> >>> +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
> >>> +            goto gp_fault;
> >> The MSR is available if:
> >>
> >> "If any one enumeration
> >> condition for defined bit
> >> field position greater than
> >> bit 0 holds."
> >>
> >> which for us means cp->basic.vmx || cp->feat.lmce at the moment, with
> >> perhaps some smx/sgx in the future.
> > I don't think there's a lmce cpu bit (seems to be signaled in
> > IA32_MCG_CAP[27] = 1), maybe I should just use vmce_has_lmce?
> >
> > if ( !cp->basic.vmx || !vmce_has_lmce(v) )
> >     goto gp_fault;
> 
> Ah yes, sorry.
> 
> Eventually that will be mp->mcg_cap.lmce, but use the predicate for now.
> 
> > Is it fine however to return a #GP if we don't provide any of those
> > features to guests, but the underlying CPU does support them?
> 
> Yes.  That is literally how the availability of the MSR specified by Intel.
> 
> Any code which doesn't follow those rules will find #GP even on some
> recent CPUs.
> 
> > That seems to be slightly different from how we currently handle reads
> > to FEATURE_CONTROL, since it will never fault on Intel (or compliant),
> > even if we just return with bit 0 set alone. The new approach seems
> > closer to what real hardware would do.
> 
> Don't fall into the trap of assuming that the current MSR behaviour is
> perfect, and something we wish to preserve. :)

Sure, I've pointed out the changes in the commit message, since the
behavior will be different now.

Thanks, Roger.



* Re: [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR
  2020-09-01 10:54 ` [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
@ 2020-09-04  8:34   ` Jan Beulich
  2020-09-07  3:25   ` Tian, Kevin
  1 sibling, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2020-09-04  8:34 UTC (permalink / raw)
  To: Jun Nakajima, Kevin Tian
  Cc: Roger Pau Monne, xen-devel, Andrew Cooper, Wei Liu

On 01.09.2020 12:54, Roger Pau Monne wrote:
> Such handling consist in checking that no bits have been changed from
> the read value, if that's the case silently drop the write, otherwise
> inject a fault.
> 
> At least Windows guests will expect to write to the MISC_ENABLE MSR
> with the same value that's been read from it.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)

Btw - early next week I think I'll time out waiting for a VMX side
ack for this change.

Jan



* Re: [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-09-01 10:54 ` [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
@ 2020-09-04  8:36   ` Jan Beulich
  2020-09-04  9:47     ` Andrew Cooper
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2020-09-04  8:36 UTC (permalink / raw)
  To: Roger Pau Monne, Andrew Cooper; +Cc: xen-devel, Wei Liu

On 01.09.2020 12:54, Roger Pau Monne wrote:
> The SYSCFG, TOP_MEM1 and TOP_MEM2 MSRs are currently exposed to guests
> and writes are silently discarded. Make this explicit in the SVM code
> now, and just return default constant values when attempting to read
> any of the MSRs, while continuing to silently drop writes.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1917,6 +1917,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>              goto gpf;
>          break;
>  
> +    case MSR_K8_SYSCFG:
> +    case MSR_K8_TOP_MEM1:
> +    case MSR_K8_TOP_MEM2:
>      case MSR_K8_VM_CR:
>          *msr_content = 0;
>          break;

Andrew, since you did suggest otherwise before, may I ask for an
explicit statement of yours here, be it in ack/nak form or something
less formal?

Jan



* Re: [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases
  2020-09-03  8:15     ` Roger Pau Monné
@ 2020-09-04  8:39       ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2020-09-04  8:39 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Andrew Cooper, xen-devel, Wei Liu

On 03.09.2020 10:15, Roger Pau Monné wrote:
> On Wed, Sep 02, 2020 at 10:02:33PM +0100, Andrew Cooper wrote:
>> On 01/09/2020 11:54, Roger Pau Monne wrote:
>>> @@ -1942,19 +1966,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>>      default:
>>>          if ( rdmsr_safe(msr, *msr_content) == 0 )
>>>              break;
>>> -
>>> -        if ( boot_cpu_data.x86 == 0xf && msr == MSR_F10_BU_CFG )
>>> -        {
>>> -            /* Win2k8 x64 reads this MSR on revF chips, where it
>>> -             * wasn't publically available; it uses a magic constant
>>> -             * in %rdi as a password, which we don't have in
>>> -             * rdmsr_safe().  Since we'll ignore the later writes,
>>> -             * just use a plausible value here (the reset value from
>>> -             * rev10h chips) if the real CPU didn't provide one. */
>>> -            *msr_content = 0x0000000010200020ull;
>>> -            break;
>>> -        }
>>> -
>>>          goto gpf;
>>>      }
>>>  
>>> @@ -2110,6 +2121,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>          nsvm->ns_msr_hsavepa = msr_content;
>>>          break;
>>>  
>>> +    case MSR_F10_BU_CFG:
>>> +    case MSR_F10_BU_CFG2:
>>> +        if ( rdmsr_safe(msr, msr_content) )
>>> +            goto gpf;
>>
>> The comment you've moved depends on this not having this behaviour, as
>> you'll now hand #GP back to Win2k8 on its write.
> 
> Oh, unless I'm very confused the comment was already wrong, since
> svm_msr_write_intercept previous to this patch would return a #GP when
> trying to write to BU_CFG if the underlying MSR is not readable, so
> this just keeps the previous behavior.
> 
> Looking at the original commit (338db98dd8d) it seems the intention
> was to only handle reads and let writes return a #GP?

I agree. And hence while moving it I think you want to also adjust
that comment.

Jan



* Re: [PATCH v3 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-09-01 10:54 ` [PATCH v3 7/8] x86/hvm: Disallow " Roger Pau Monne
@ 2020-09-04  8:53   ` Jan Beulich
  2020-09-04  9:44     ` Andrew Cooper
  2020-09-04 11:13     ` Roger Pau Monné
  2020-09-07  3:31   ` Tian, Kevin
  1 sibling, 2 replies; 28+ messages in thread
From: Jan Beulich @ 2020-09-04  8:53 UTC (permalink / raw)
  To: Roger Pau Monne, Andrew Cooper
  Cc: xen-devel, Wei Liu, Jun Nakajima, Kevin Tian

On 01.09.2020 12:54, Roger Pau Monne wrote:
> @@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
>          break;
>  
> -    case MSR_IA32_FEATURE_CONTROL:
> -    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
> -        /* None of these MSRs are writeable. */
> -        goto gp_fault;

I understand Andrew did ask for this (and I didn't look closely
when I saw the comment), but ...

> @@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>               is_last_branch_msr(msr) )
>              break;
>  
> -        /* Match up with the RDMSR side; ultimately this should go away. */
> -        if ( rdmsr_safe(msr, msr_content) == 0 )
> -            break;
> -
> +        gdprintk(XENLOG_WARNING,
> +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> +                 msr, msr_content);
>          goto gp_fault;

... above from here is logic that handling of these MSRs now goes
through. I'm particularly worried about vmx_write_guest_msr(),
which blindly updates the value of any MSR it can find, i.e. if
any r/o MSR (from the set above, or even more generally) ever got
added to this vCPU-specific set, the r/o-ness would no longer be
maintained. Do we perhaps need to white-list MSRs for which
vmx_write_guest_msr() may get called here?

Jan



* Re: [PATCH v3 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-09-04  8:53   ` Jan Beulich
@ 2020-09-04  9:44     ` Andrew Cooper
  2020-09-04  9:58       ` Jan Beulich
  2020-09-04 11:13     ` Roger Pau Monné
  1 sibling, 1 reply; 28+ messages in thread
From: Andrew Cooper @ 2020-09-04  9:44 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monne; +Cc: xen-devel, Wei Liu, Jun Nakajima, Kevin Tian

On 04/09/2020 09:53, Jan Beulich wrote:
> On 01.09.2020 12:54, Roger Pau Monne wrote:
>> @@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>          __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
>>          break;
>>  
>> -    case MSR_IA32_FEATURE_CONTROL:
>> -    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
>> -        /* None of these MSRs are writeable. */
>> -        goto gp_fault;
> I understand Andrew did ask for this (and I didn't look closely
> when I saw the comment), but ...
>
>> @@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>               is_last_branch_msr(msr) )
>>              break;
>>  
>> -        /* Match up with the RDMSR side; ultimately this should go away. */
>> -        if ( rdmsr_safe(msr, msr_content) == 0 )
>> -            break;
>> -
>> +        gdprintk(XENLOG_WARNING,
>> +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
>> +                 msr, msr_content);
>>          goto gp_fault;
> ... above from here is logic that handling of these MSRs now goes
> through. I'm particularly worried about vmx_write_guest_msr(),
> which blindly updates the value of any MSR it can find, i.e. if
> any r/o MSR (from the set above, or even more generally) ever got
> added to this vCPU-specific set, the r/o-ness would no longer be
> maintained. Do we perhaps need to white-list MSRs for which
> vmx_write_guest_msr() may get called here?

If a read-only MSR ever actually gets into the load/save list, we'll
take a VMEntry failure (guest load) or SHUTDOWN (host load) as a
consequence of microcode taking a #GP fault.

However, restricting the content of this catch-all clause to nothing
(but the printk) is something I have planned as part of the ongoing MSR
cleanup work.

~Andrew



* Re: [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-09-04  8:36   ` Jan Beulich
@ 2020-09-04  9:47     ` Andrew Cooper
  0 siblings, 0 replies; 28+ messages in thread
From: Andrew Cooper @ 2020-09-04  9:47 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monne; +Cc: xen-devel, Wei Liu

On 04/09/2020 09:36, Jan Beulich wrote:
> On 01.09.2020 12:54, Roger Pau Monne wrote:
>> The SYSCFG, TOP_MEM1 and TOP_MEM2 MSRs are currently exposed to guests
>> and writes are silently discarded. Make this explicit in the SVM code
>> now, and just return default constant values when attempting to read
>> any of the MSRs, while continuing to silently drop writes.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -1917,6 +1917,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>              goto gpf;
>>          break;
>>  
>> +    case MSR_K8_SYSCFG:
>> +    case MSR_K8_TOP_MEM1:
>> +    case MSR_K8_TOP_MEM2:
>>      case MSR_K8_VM_CR:
>>          *msr_content = 0;
>>          break;
> Andrew, since you did suggest otherwise before, may I ask for an
> explicit statement of yours here, be it in ack/nak form or something
> less formal?

I'm not entirely convinced it is a safe thing to do, but let's see what
happens.

There is likely to be a bug tail from flipping the default behaviour
at the end of the series, and this is certainly a simpler set of logic
than the alternative.

~Andrew



* Re: [PATCH v3 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-09-04  9:44     ` Andrew Cooper
@ 2020-09-04  9:58       ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2020-09-04  9:58 UTC (permalink / raw)
  To: Andrew Cooper, Roger Pau Monne
  Cc: xen-devel, Wei Liu, Jun Nakajima, Kevin Tian

On 04.09.2020 11:44, Andrew Cooper wrote:
> On 04/09/2020 09:53, Jan Beulich wrote:
>> On 01.09.2020 12:54, Roger Pau Monne wrote:
>>> @@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>          __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
>>>          break;
>>>  
>>> -    case MSR_IA32_FEATURE_CONTROL:
>>> -    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
>>> -        /* None of these MSRs are writeable. */
>>> -        goto gp_fault;
>> I understand Andrew did ask for this (and I didn't look closely
>> when I saw the comment), but ...
>>
>>> @@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>               is_last_branch_msr(msr) )
>>>              break;
>>>  
>>> -        /* Match up with the RDMSR side; ultimately this should go away. */
>>> -        if ( rdmsr_safe(msr, msr_content) == 0 )
>>> -            break;
>>> -
>>> +        gdprintk(XENLOG_WARNING,
>>> +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
>>> +                 msr, msr_content);
>>>          goto gp_fault;
>> ... above from here is logic that handling of these MSRs now goes
>> through. I'm particularly worried about vmx_write_guest_msr(),
>> which blindly updates the value of any MSR it can find, i.e. if
>> any r/o MSR (from the set above, or even more generally) ever got
>> added to this vCPU-specific set, the r/o-ness would no longer be
>> maintained. Do we perhaps need to white-list MSRs for which
>> vmx_write_guest_msr() may get called here?
> 
> If a read-only MSR ever actually gets into the load/save list, we'll
> take a VMEntry failure (guest load) or SHUTDOWN (host load) as a
> consequence of microcode takes a #GP fault.

Oh, of course. In order to surface a value different from the hardware's,
one has to intercept such MSRs.

> However, restricting the content of this catch-all clause to nothing
> (but the printk) is something I have planned as part of the ongoing MSR
> cleanup work.

Good to know.

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v3 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-09-04  8:53   ` Jan Beulich
  2020-09-04  9:44     ` Andrew Cooper
@ 2020-09-04 11:13     ` Roger Pau Monné
  1 sibling, 0 replies; 28+ messages in thread
From: Roger Pau Monné @ 2020-09-04 11:13 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, xen-devel, Wei Liu, Jun Nakajima, Kevin Tian

On Fri, Sep 04, 2020 at 10:53:26AM +0200, Jan Beulich wrote:
> On 01.09.2020 12:54, Roger Pau Monne wrote:
> > @@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >          __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
> >          break;
> >  
> > -    case MSR_IA32_FEATURE_CONTROL:
> > -    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
> > -        /* None of these MSRs are writeable. */
> > -        goto gp_fault;
> 
> I understand Andrew did ask for this (and I didn't look closely
> when I saw the comment), but ...
> 
> > @@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >               is_last_branch_msr(msr) )
> >              break;
> >  
> > -        /* Match up with the RDMSR side; ultimately this should go away. */
> > -        if ( rdmsr_safe(msr, msr_content) == 0 )
> > -            break;
> > -
> > +        gdprintk(XENLOG_WARNING,
> > +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> > +                 msr, msr_content);
> >          goto gp_fault;
> 
> ... above from here is logic that handling of these MSRs now goes
> through. I'm particularly worried about vmx_write_guest_msr(),
> which blindly updates the value of any MSR it can find, i.e. if
> any r/o MSR (from the set above, or even more generally) ever got
> added to this vCPU-specific set, the r/o-ness would no longer be
> maintained.

But those MSRs need to be added to the list explicitly (using
vmx_add_guest_msr) in order for the guest to be able to update them,
and they are supposed to be owned by the guest?

I understand the concern, but AFAICT none of the MSRs handled by
VMX_MSR_GUEST require such handling. Maybe it's worth adding something
like VMX_MSR_GUEST_RO in the future if such a need arises?

> Do we perhaps need to white-list MSRs for which
> vmx_write_guest_msr() may get called here?

When such MSRs are added, such addition should make sure they are not
allowed write permissions? You would have to do that now anyway
AFAICT.
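
A minimal sketch of what such a VMX_MSR_GUEST_RO flavour might look
like. All names here are invented for illustration — Xen has no
VMX_MSR_GUEST_RO today, and its real load/save list plumbing differs:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical ownership tag on load/save-list entries. */
enum msr_list_type { VMX_MSR_GUEST, VMX_MSR_GUEST_RO };

struct msr_entry {
    uint32_t index;
    enum msr_list_type type;
    uint64_t data;
};

/* 0 on success, -1 to signal #GP back to the guest. */
static int write_guest_msr(struct msr_entry *list, unsigned int nr,
                           uint32_t msr, uint64_t val)
{
    for ( unsigned int i = 0; i < nr; i++ )
    {
        if ( list[i].index != msr )
            continue;
        if ( list[i].type == VMX_MSR_GUEST_RO )
            return -1; /* present on the list, but not guest-writable */
        list[i].data = val;
        return 0;
    }
    return -1; /* not on the list at all */
}
```

With the tag in place, a guest write to an entry Xen owns would fault
instead of blindly updating the value, addressing the concern above.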

Roger.



* RE: [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR
  2020-09-01 10:54 ` [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
  2020-09-04  8:34   ` Jan Beulich
@ 2020-09-07  3:25   ` Tian, Kevin
  2020-09-07  7:22     ` Jan Beulich
  1 sibling, 1 reply; 28+ messages in thread
From: Tian, Kevin @ 2020-09-07  3:25 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Nakajima, Jun, Jan Beulich, Andrew Cooper, Wei Liu

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Tuesday, September 1, 2020 6:55 PM
> 
> Such handling consists of checking that no bits have been changed from
> the read value: if that's the case, silently drop the write; otherwise
> inject a fault.
> 
> At least Windows guests will expect to write to the MISC_ENABLE MSR
> with the same value that's been read from it.

for better readability could you also add this line to the code comment
below?

with that:

	Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index a0d58ffbe2..4717e50d4a 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3163,7 +3163,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> 
>      switch ( msr )
>      {
> -        uint64_t rsvd;
> +        uint64_t rsvd, tmp;
> 
>      case MSR_IA32_SYSENTER_CS:
>          __vmwrite(GUEST_SYSENTER_CS, msr_content);
> @@ -3301,6 +3301,13 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          /* None of these MSRs are writeable. */
>          goto gp_fault;
> 
> +    case MSR_IA32_MISC_ENABLE:
> +        /* Silently drop writes that don't change the reported value. */
> +        if ( vmx_msr_read_intercept(msr, &tmp) != X86EMUL_OKAY ||
> +             tmp != msr_content )
> +            goto gp_fault;
> +        break;
> +
>      case MSR_P6_PERFCTR(0)...MSR_P6_PERFCTR(7):
>      case MSR_P6_EVNTSEL(0)...MSR_P6_EVNTSEL(7):
>      case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
> --
> 2.28.0
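
For readers outside the thread, the MISC_ENABLE handling added in the
patch boils down to the following pattern — a standalone C sketch with
stand-in names (read_misc_enable() stands in for
vmx_msr_read_intercept()), not the actual Xen handler:

```c
/*
 * Standalone sketch of the "silently drop writes that don't change the
 * reported value" pattern from the patch.  The constants mirror the
 * X86EMUL_* return values; the reported value is a fixed stand-in.
 */
#include <stdint.h>

#define X86EMUL_OKAY      0
#define X86EMUL_EXCEPTION 1

static int read_misc_enable(uint64_t *val)
{
    *val = 0x800; /* stand-in for the value Xen reports to the guest */
    return X86EMUL_OKAY;
}

static int write_misc_enable(uint64_t msr_content)
{
    uint64_t tmp;

    /* Tolerate only writes that leave the reported value unchanged. */
    if ( read_misc_enable(&tmp) != X86EMUL_OKAY || tmp != msr_content )
        return X86EMUL_EXCEPTION; /* -> #GP injected into the guest */

    return X86EMUL_OKAY; /* write silently dropped */
}
```

This is exactly the behavior Windows guests rely on: writing back the
value previously read succeeds, while any attempt to flip bits faults.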



* RE: [PATCH v3 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-09-01 10:54 ` [PATCH v3 7/8] x86/hvm: Disallow " Roger Pau Monne
  2020-09-04  8:53   ` Jan Beulich
@ 2020-09-07  3:31   ` Tian, Kevin
  1 sibling, 0 replies; 28+ messages in thread
From: Tian, Kevin @ 2020-09-07  3:31 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Andrew Cooper, Jan Beulich, Wei Liu, Nakajima, Jun

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Tuesday, September 1, 2020 6:55 PM
> 
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Change the catch-all behavior for MSRs not explicitly handled. Instead
> of allowing full read access to the MSR space and silently dropping
> writes, return an exception when the MSR is not explicitly handled.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> [remove rdmsr_safe from default case in svm_msr_read_intercept]
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> Changes since v1:
>  - Fold chunk to remove explicit write handling of VMX MSRs just to
>    #GP.
>  - Remove catch-all rdmsr_safe in svm_msr_read_intercept.
> ---
>  xen/arch/x86/hvm/svm/svm.c | 10 ++++------
>  xen/arch/x86/hvm/vmx/vmx.c | 16 ++++------------
>  2 files changed, 8 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 0e43154c7e..66b22efdab 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1964,8 +1964,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          break;
> 
>      default:
> -        if ( rdmsr_safe(msr, *msr_content) == 0 )
> -            break;
> +        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
>          goto gpf;
>      }
> 
> @@ -2150,10 +2149,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          break;
> 
>      default:
> -        /* Match up with the RDMSR side; ultimately this should go away. */
> -        if ( rdmsr_safe(msr, msr_content) == 0 )
> -            break;
> -
> +        gdprintk(XENLOG_WARNING,
> +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> +                 msr, msr_content);
>          goto gpf;
>      }
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index f6657af923..9cc9d81c41 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3015,9 +3015,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>              break;
>          }
> 
> -        if ( rdmsr_safe(msr, *msr_content) == 0 )
> -            break;
> -
> +        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
>          goto gp_fault;
>      }
> 
> @@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
>          break;
> 
> -    case MSR_IA32_FEATURE_CONTROL:
> -    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
> -        /* None of these MSRs are writeable. */
> -        goto gp_fault;
> -
>      case MSR_IA32_MISC_ENABLE:
>          /* Silently drop writes that don't change the reported value. */
>          if ( vmx_msr_read_intercept(msr, &tmp) != X86EMUL_OKAY ||
> @@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>               is_last_branch_msr(msr) )
>              break;
> 
> -        /* Match up with the RDMSR side; ultimately this should go away. */
> -        if ( rdmsr_safe(msr, msr_content) == 0 )
> -            break;
> -
> +        gdprintk(XENLOG_WARNING,
> +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> +                 msr, msr_content);
>          goto gp_fault;
>      }
> 
> --
> 2.28.0
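
The behavioral change in this patch can be summarised with a small
sketch — plain C with stand-in names (host_rdmsr_safe() stands in for
rdmsr_safe()), not the actual Xen intercept handlers:

```c
/*
 * Sketch of the default-case change: the old catch-all fell back to
 * reading the host MSR (leaking host state into the guest) and
 * silently dropped writes; the new one raises #GP for any MSR not
 * explicitly handled.
 */
#include <stdbool.h>
#include <stdint.h>

#define X86EMUL_OKAY      0
#define X86EMUL_EXCEPTION 1

/* Stand-in for rdmsr_safe(): pretend the host MSR exists and reads 0. */
static int host_rdmsr_safe(uint32_t msr, uint64_t *val)
{
    (void)msr;
    *val = 0;
    return 0;
}

/* Default-case handling for an MSR with no explicit case label. */
static int default_rdmsr(uint32_t msr, uint64_t *val, bool new_behavior)
{
    if ( new_behavior )
        return X86EMUL_EXCEPTION; /* unknown MSR -> #GP */

    /* Old behavior: fall back to the host value when readable. */
    return host_rdmsr_safe(msr, val) == 0 ? X86EMUL_OKAY
                                          : X86EMUL_EXCEPTION;
}
```

The write side changes analogously: instead of probing the host MSR and
silently discarding the value, the new default prints a warning and
injects #GP.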



* Re: [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR
  2020-09-07  3:25   ` Tian, Kevin
@ 2020-09-07  7:22     ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2020-09-07  7:22 UTC (permalink / raw)
  To: Tian, Kevin, Roger Pau Monne
  Cc: xen-devel, Nakajima, Jun, Andrew Cooper, Wei Liu

On 07.09.2020 05:25, Tian, Kevin wrote:
>> From: Roger Pau Monne <roger.pau@citrix.com>
>> Sent: Tuesday, September 1, 2020 6:55 PM
>>
>> Such handling consists of checking that no bits have been changed from
>> the read value: if that's the case, silently drop the write; otherwise
>> inject a fault.
>>
>> At least Windows guests will expect to write to the MISC_ENABLE MSR
>> with the same value that's been read from it.
> 
> for better readability could you also add this line to the code comment
> below?
> 
> with that:
> 
> 	Reviewed-by: Kevin Tian <kevin.tian@intel.com>

I'll fold this in while committing.

Jan




Thread overview: 28+ messages
2020-09-01 10:54 [PATCH v3 0/8] x86: switch default MSR behavior Roger Pau Monne
2020-09-01 10:54 ` [PATCH v3 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
2020-09-04  8:34   ` Jan Beulich
2020-09-07  3:25   ` Tian, Kevin
2020-09-07  7:22     ` Jan Beulich
2020-09-01 10:54 ` [PATCH v3 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
2020-09-04  8:36   ` Jan Beulich
2020-09-04  9:47     ` Andrew Cooper
2020-09-01 10:54 ` [PATCH v3 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
2020-09-02 20:49   ` Andrew Cooper
2020-09-01 10:54 ` [PATCH v3 4/8] x86/svm: handle BU_CFG and BU_CFG2 with cases Roger Pau Monne
2020-09-02 21:02   ` Andrew Cooper
2020-09-03  8:15     ` Roger Pau Monné
2020-09-04  8:39       ` Jan Beulich
2020-09-03  8:29     ` Jan Beulich
2020-09-01 10:54 ` [PATCH v3 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
2020-09-02 20:56   ` Andrew Cooper
2020-09-03 13:33     ` Roger Pau Monné
2020-09-03 14:06       ` Andrew Cooper
2020-09-03 14:10         ` Roger Pau Monné
2020-09-01 10:54 ` [PATCH v3 6/8] x86/pv: disallow access to unknown MSRs Roger Pau Monne
2020-09-01 10:54 ` [PATCH v3 7/8] x86/hvm: Disallow " Roger Pau Monne
2020-09-04  8:53   ` Jan Beulich
2020-09-04  9:44     ` Andrew Cooper
2020-09-04  9:58       ` Jan Beulich
2020-09-04 11:13     ` Roger Pau Monné
2020-09-07  3:31   ` Tian, Kevin
2020-09-01 10:54 ` [PATCH v3 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr() Roger Pau Monne
