* [PATCH v2 0/8] x86: switch default MSR behavior
@ 2020-08-20 15:08 Roger Pau Monne
  2020-08-20 15:08 ` [PATCH v2 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
                   ` (7 more replies)
  0 siblings, 8 replies; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jun Nakajima, Kevin Tian, Jan Beulich,
	Andrew Cooper, Wei Liu

Hello,

This series changes the default MSR handling behavior, which currently
is to silently drop writes to writable MSRs and to allow reading any
MSR not explicitly handled.

After this series, accesses to MSRs not explicitly handled will trigger
a #GP fault. I've tested the series with osstest and it doesn't
introduce any regressions, at least on the boxes selected for testing:

http://logs.test-lab.xenproject.org/osstest/logs/152630/
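
In rough terms, the read-side catch-all in the various MSR intercepts
changes along the lines below; the write side similarly goes from
silently discarding writes to faulting. This is a condensed sketch of
the later hunks, not a literal diff, and identifier names vary between
the PV and HVM intercepts:

    /* Before: reads pass through to hardware, writes are discarded. */
    default:
        if ( rdmsr_safe(msr, *val) == 0 )
            break;
        goto gp_fault;

    /* After: anything not explicitly handled raises #GP. */
    default:
        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
        goto gp_fault;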

Thanks, Roger.

Andrew Cooper (2):
  x86/hvm: Disallow access to unknown MSRs
  x86/msr: Drop compatibility #GP handling in guest_{rd,wr}msr()

Roger Pau Monne (6):
  x86/vmx: handle writes to MISC_ENABLE MSR
  x86/svm: silently drop writes to SYSCFG and related MSRs
  x86/msr: explicitly handle AMD DE_CFG
  x86/svm: drop writes to BU_CFG on revF chips
  x86/pv: allow reading FEATURE_CONTROL MSR
  x86/pv: disallow access to unknown MSRs

 xen/arch/x86/hvm/svm/svm.c     | 38 ++++++++++++++----
 xen/arch/x86/hvm/vmx/vmx.c     | 31 ++++++---------
 xen/arch/x86/msr.c             | 71 +++++++++++++---------------------
 xen/arch/x86/pv/emul-priv-op.c | 18 +++++----
 4 files changed, 79 insertions(+), 79 deletions(-)

-- 
2.28.0




* [PATCH v2 1/8] x86/vmx: handle writes to MISC_ENABLE MSR
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-20 15:08 ` [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jun Nakajima, Kevin Tian, Jan Beulich,
	Andrew Cooper, Wei Liu

Such handling consists of checking that no bits have been changed from
the read value: if that's the case the write is silently dropped,
otherwise a fault is injected.

At least Windows guests will expect to write to the MISC_ENABLE MSR
with the same value that's been read from it.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a0d58ffbe2..4717e50d4a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3163,7 +3163,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 
     switch ( msr )
     {
-        uint64_t rsvd;
+        uint64_t rsvd, tmp;
 
     case MSR_IA32_SYSENTER_CS:
         __vmwrite(GUEST_SYSENTER_CS, msr_content);
@@ -3301,6 +3301,13 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         /* None of these MSRs are writeable. */
         goto gp_fault;
 
+    case MSR_IA32_MISC_ENABLE:
+        /* Silently drop writes that don't change the reported value. */
+        if ( vmx_msr_read_intercept(msr, &tmp) != X86EMUL_OKAY ||
+             tmp != msr_content )
+            goto gp_fault;
+        break;
+
     case MSR_P6_PERFCTR(0)...MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0)...MSR_P6_EVNTSEL(7):
     case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
-- 
2.28.0




* [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
  2020-08-20 15:08 ` [PATCH v2 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-27 15:03   ` Jan Beulich
  2020-08-20 15:08 ` [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

The SYSCFG, TOP_MEM1 and TOP_MEM2 MSRs are currently exposed to guests,
with writes being silently discarded. Make this explicit in the SVM
code: return default constant values when any of these MSRs is read,
while continuing to silently drop writes.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Return MtrrFixDramEn in MSR_K8_SYSCFG.
---
 xen/arch/x86/hvm/svm/svm.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index ca3bbfcbb3..2d0823e7e1 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1917,6 +1917,21 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             goto gpf;
         break;
 
+    case MSR_K8_TOP_MEM1:
+    case MSR_K8_TOP_MEM2:
+        *msr_content = 0;
+        break;
+
+    case MSR_K8_SYSCFG:
+        /*
+         * Return MtrrFixDramEn: albeit the current emulated MTRR
+         * implementation doesn't support the Extended Type-Field Format having
+         * such bit set is common on AMD hardware and is harmless as long as
+         * MtrrFixDramModEn isn't set.
+         */
+        *msr_content = K8_MTRRFIXRANGE_DRAM_ENABLE;
+        break;
+
     case MSR_K8_VM_CR:
         *msr_content = 0;
         break;
@@ -2094,6 +2109,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             goto gpf;
         break;
 
+    case MSR_K8_TOP_MEM1:
+    case MSR_K8_TOP_MEM2:
+    case MSR_K8_SYSCFG:
+        /* Drop writes. */
+        break;
+
     case MSR_K8_VM_CR:
         /* ignore write. handle all bits as read-only. */
         break;
-- 
2.28.0




* [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
  2020-08-20 15:08 ` [PATCH v2 1/8] x86/vmx: handle writes to MISC_ENABLE MSR Roger Pau Monne
  2020-08-20 15:08 ` [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-20 17:08   ` Andrew Cooper
  2020-08-20 15:08 ` [PATCH v2 4/8] x86/svm: drop writes to BU_CFG on revF chips Roger Pau Monne
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Report the hardware value of DE_CFG on AMD hardware and silently drop
writes.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/msr.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index ca4307e19f..a890cb9976 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -274,6 +274,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = msrs->tsc_aux;
         break;
 
+    case MSR_AMD64_DE_CFG:
+        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
+             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD |
+                                           X86_VENDOR_HYGON)) ||
+             rdmsr_safe(MSR_AMD64_DE_CFG, *val) )
+            goto gp_fault;
+        break;
+
     case MSR_AMD64_DR0_ADDRESS_MASK:
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !cp->extd.dbext )
@@ -499,6 +507,12 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsr_tsc_aux(val);
         break;
 
+    case MSR_AMD64_DE_CFG:
+        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
+             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+            goto gp_fault;
+        break;
+
     case MSR_AMD64_DR0_ADDRESS_MASK:
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !cp->extd.dbext || val != (uint32_t)val )
-- 
2.28.0




* [PATCH v2 4/8] x86/svm: drop writes to BU_CFG on revF chips
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (2 preceding siblings ...)
  2020-08-20 15:08 ` [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-27 15:42   ` Jan Beulich
  2020-08-20 15:08 ` [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

We already have special casing to handle reads of this MSR for revF
chips, so do as the comment in svm_msr_read_intercept says and drop
writes. This is in preparation for changing the default MSR write
behavior, which will instead return #GP for writes that are not
explicitly handled.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/svm/svm.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2d0823e7e1..7586b77268 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2125,6 +2125,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         nsvm->ns_msr_hsavepa = msr_content;
         break;
 
+    case MSR_F10_BU_CFG:
+        /* See comment in svm_msr_read_intercept. */
+        if ( boot_cpu_data.x86 != 0xf )
+            goto gpf;
+        break;
+
     case MSR_AMD64_TSC_RATIO:
         if ( msr_content & TSC_RATIO_RSVD_BITS )
             goto gpf;
-- 
2.28.0




* [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (3 preceding siblings ...)
  2020-08-20 15:08 ` [PATCH v2 4/8] x86/svm: drop writes to BU_CFG on revF chips Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-27 15:53   ` Jan Beulich
  2020-08-20 15:08 ` [PATCH v2 6/8] x86/pv: disallow access to unknown MSRs Roger Pau Monne
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jun Nakajima, Kevin Tian, Jan Beulich,
	Andrew Cooper, Wei Liu

Linux PV guests will attempt to read the FEATURE_CONTROL MSR, so move
the handling done in VMX code into guest_rdmsr as it can be shared
between PV and HVM guests that way.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes from v1:
 - Move the VMX implementation into guest_rdmsr.
---
 xen/arch/x86/hvm/vmx/vmx.c |  8 +-------
 xen/arch/x86/msr.c         | 13 +++++++++++++
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4717e50d4a..f6657af923 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2980,13 +2980,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_DEBUGCTLMSR:
         __vmread(GUEST_IA32_DEBUGCTL, msr_content);
         break;
-    case MSR_IA32_FEATURE_CONTROL:
-        *msr_content = IA32_FEATURE_CONTROL_LOCK;
-        if ( vmce_has_lmce(curr) )
-            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
-        if ( nestedhvm_enabled(curr->domain) )
-            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
-        break;
+
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
         if ( !nvmx_msr_read_intercept(msr, msr_content) )
             goto gp_fault;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index a890cb9976..bb0dd5ff0a 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -25,6 +25,7 @@
 #include <xen/sched.h>
 
 #include <asm/debugreg.h>
+#include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/viridian.h>
 #include <asm/msr.h>
 #include <asm/setup.h>
@@ -181,6 +182,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         /* Not offered to guests. */
         goto gp_fault;
 
+    case MSR_IA32_FEATURE_CONTROL:
+        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
+            goto gp_fault;
+
+        *val = IA32_FEATURE_CONTROL_LOCK;
+        if ( vmce_has_lmce(v) )
+            *val |= IA32_FEATURE_CONTROL_LMCE_ON;
+        if ( nestedhvm_enabled(d) )
+            *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
+        break;
+
+
     case MSR_IA32_PLATFORM_ID:
         if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
              !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )
-- 
2.28.0




* [PATCH v2 6/8] x86/pv: disallow access to unknown MSRs
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (4 preceding siblings ...)
  2020-08-20 15:08 ` [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-28  8:45   ` Jan Beulich
  2020-08-20 15:08 ` [PATCH v2 7/8] x86/hvm: Disallow " Roger Pau Monne
  2020-08-20 15:08 ` [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr() Roger Pau Monne
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

Change the catch-all behavior for MSRs not explicitly handled. Instead
of allowing full read access to the MSR space and silently dropping
writes, raise an exception when the MSR is not explicitly handled.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/pv/emul-priv-op.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index bcc1188f6a..d4735b4f06 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -972,9 +972,10 @@ static int read_msr(unsigned int reg, uint64_t *val,
         }
         /* fall through */
     default:
+        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", reg);
+        break;
+
     normal:
-        /* Everyone can read the MSR space. */
-        /* gdprintk(XENLOG_WARNING, "Domain attempted RDMSR %08x\n", reg); */
         if ( rdmsr_safe(reg, *val) )
             break;
         return X86EMUL_OKAY;
@@ -1141,14 +1142,15 @@ static int write_msr(unsigned int reg, uint64_t val,
         }
         /* fall through */
     default:
-        if ( rdmsr_safe(reg, temp) )
-            break;
+        gdprintk(XENLOG_WARNING,
+                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
+                 reg, val);
+        break;
 
-        if ( val != temp )
     invalid:
-            gdprintk(XENLOG_WARNING,
-                     "Domain attempted WRMSR %08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
-                     reg, temp, val);
+        gdprintk(XENLOG_WARNING,
+                 "Domain attempted WRMSR %08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
+                 reg, temp, val);
         return X86EMUL_OKAY;
     }
 
-- 
2.28.0




* [PATCH v2 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (5 preceding siblings ...)
  2020-08-20 15:08 ` [PATCH v2 6/8] x86/pv: disallow access to unknown MSRs Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-28  8:51   ` Jan Beulich
  2020-08-20 15:08 ` [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr() Roger Pau Monne
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Jan Beulich, Wei Liu, Roger Pau Monné,
	Jun Nakajima, Kevin Tian

From: Andrew Cooper <andrew.cooper3@citrix.com>

Change the catch-all behavior for MSRs not explicitly handled. Instead
of allowing full read access to the MSR space and silently dropping
writes, raise an exception when the MSR is not explicitly handled.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
[remove rdmsr_safe from default case in svm_msr_read_intercept]
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Fold chunk to remove explicit write handling of VMX MSRs just to
   #GP.
 - Remove catch-all rdmsr_safe in svm_msr_read_intercept.
---
 xen/arch/x86/hvm/svm/svm.c | 11 ++++-------
 xen/arch/x86/hvm/vmx/vmx.c | 16 ++++------------
 2 files changed, 8 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 7586b77268..1e4458c184 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1952,9 +1952,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     default:
-        if ( rdmsr_safe(msr, *msr_content) == 0 )
-            break;
-
         if ( boot_cpu_data.x86 == 0xf && msr == MSR_F10_BU_CFG )
         {
             /* Win2k8 x64 reads this MSR on revF chips, where it
@@ -1967,6 +1964,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             break;
         }
 
+        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
         goto gpf;
     }
 
@@ -2154,10 +2152,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
 
     default:
-        /* Match up with the RDMSR side; ultimately this should go away. */
-        if ( rdmsr_safe(msr, msr_content) == 0 )
-            break;
-
+        gdprintk(XENLOG_WARNING,
+                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
+                 msr, msr_content);
         goto gpf;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f6657af923..9cc9d81c41 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3015,9 +3015,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             break;
         }
 
-        if ( rdmsr_safe(msr, *msr_content) == 0 )
-            break;
-
+        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", msr);
         goto gp_fault;
     }
 
@@ -3290,11 +3288,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
         break;
 
-    case MSR_IA32_FEATURE_CONTROL:
-    case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
-        /* None of these MSRs are writeable. */
-        goto gp_fault;
-
     case MSR_IA32_MISC_ENABLE:
         /* Silently drop writes that don't change the reported value. */
         if ( vmx_msr_read_intercept(msr, &tmp) != X86EMUL_OKAY ||
@@ -3320,10 +3313,9 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
              is_last_branch_msr(msr) )
             break;
 
-        /* Match up with the RDMSR side; ultimately this should go away. */
-        if ( rdmsr_safe(msr, msr_content) == 0 )
-            break;
-
+        gdprintk(XENLOG_WARNING,
+                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
+                 msr, msr_content);
         goto gp_fault;
     }
 
-- 
2.28.0




* [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr()
  2020-08-20 15:08 [PATCH v2 0/8] x86: switch default MSR behavior Roger Pau Monne
                   ` (6 preceding siblings ...)
  2020-08-20 15:08 ` [PATCH v2 7/8] x86/hvm: Disallow " Roger Pau Monne
@ 2020-08-20 15:08 ` Roger Pau Monne
  2020-08-28  8:55   ` [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd,wr}msr() Jan Beulich
  7 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monne @ 2020-08-20 15:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Jan Beulich, Wei Liu, Roger Pau Monné

From: Andrew Cooper <andrew.cooper3@citrix.com>

Now that the main PV/HVM MSR handlers raise #GP for all unknown MSRs, there is
no need to special case these MSRs any more.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/msr.c | 46 ----------------------------------------------
 1 file changed, 46 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index bb0dd5ff0a..560719c2aa 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -159,29 +159,6 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
     switch ( msr )
     {
-    case MSR_AMD_PATCHLOADER:
-    case MSR_IA32_UCODE_WRITE:
-    case MSR_PRED_CMD:
-    case MSR_FLUSH_CMD:
-        /* Write-only */
-    case MSR_TEST_CTRL:
-    case MSR_CORE_CAPABILITIES:
-    case MSR_TSX_FORCE_ABORT:
-    case MSR_TSX_CTRL:
-    case MSR_MCU_OPT_CTRL:
-    case MSR_RTIT_OUTPUT_BASE ... MSR_RTIT_ADDR_B(7):
-    case MSR_U_CET:
-    case MSR_S_CET:
-    case MSR_PL0_SSP ... MSR_INTERRUPT_SSP_TABLE:
-    case MSR_AMD64_LWP_CFG:
-    case MSR_AMD64_LWP_CBADDR:
-    case MSR_PPIN_CTL:
-    case MSR_PPIN:
-    case MSR_AMD_PPIN_CTL:
-    case MSR_AMD_PPIN:
-        /* Not offered to guests. */
-        goto gp_fault;
-
     case MSR_IA32_FEATURE_CONTROL:
         if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
             goto gp_fault;
@@ -349,29 +326,6 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     {
         uint64_t rsvd;
 
-    case MSR_IA32_PLATFORM_ID:
-    case MSR_CORE_CAPABILITIES:
-    case MSR_INTEL_CORE_THREAD_COUNT:
-    case MSR_INTEL_PLATFORM_INFO:
-    case MSR_ARCH_CAPABILITIES:
-        /* Read-only */
-    case MSR_TEST_CTRL:
-    case MSR_TSX_FORCE_ABORT:
-    case MSR_TSX_CTRL:
-    case MSR_MCU_OPT_CTRL:
-    case MSR_RTIT_OUTPUT_BASE ... MSR_RTIT_ADDR_B(7):
-    case MSR_U_CET:
-    case MSR_S_CET:
-    case MSR_PL0_SSP ... MSR_INTERRUPT_SSP_TABLE:
-    case MSR_AMD64_LWP_CFG:
-    case MSR_AMD64_LWP_CBADDR:
-    case MSR_PPIN_CTL:
-    case MSR_PPIN:
-    case MSR_AMD_PPIN_CTL:
-    case MSR_AMD_PPIN:
-        /* Not offered to guests. */
-        goto gp_fault;
-
     case MSR_AMD_PATCHLEVEL:
         BUILD_BUG_ON(MSR_IA32_UCODE_REV != MSR_AMD_PATCHLEVEL);
         /*
-- 
2.28.0




* Re: [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-08-20 15:08 ` [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG Roger Pau Monne
@ 2020-08-20 17:08   ` Andrew Cooper
  2020-08-21 11:52     ` Roger Pau Monné
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Cooper @ 2020-08-20 17:08 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel; +Cc: Jan Beulich, Wei Liu

On 20/08/2020 16:08, Roger Pau Monne wrote:

> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index ca4307e19f..a890cb9976 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -274,6 +274,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          *val = msrs->tsc_aux;
>          break;
>  
> +    case MSR_AMD64_DE_CFG:
> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
> +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD |
> +                                           X86_VENDOR_HYGON)) ||
> +             rdmsr_safe(MSR_AMD64_DE_CFG, *val) )
> +            goto gp_fault;
> +        break;

Ah.  What I intended was to read just bit 2 and nothing else.

Leaking the full value is non-ideal from a migration point of view, and
in this case, you can avoid querying hardware entirely.

Just return AMD64_DE_CFG_LFENCE_SERIALISE here.  The only case where it
won't be true is when the hypervisor running us (i.e. Xen) failed to set
it up, and the CPU boot path failed to adjust it, at which point the
whole system has much bigger problems.

> +
>      case MSR_AMD64_DR0_ADDRESS_MASK:
>      case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
>          if ( !cp->extd.dbext )
> @@ -499,6 +507,12 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>              wrmsr_tsc_aux(val);
>          break;
>  
> +    case MSR_AMD64_DE_CFG:
> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
> +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +            goto gp_fault;
> +        break;

There should be no problem yielding #GP here (i.e. dropping this hunk).

IIRC, it was the behaviour of certain hypervisors when Spectre hit, so
all guests ought to cope.  (And indeed, not try to redundantly set the
bit to start with).

~Andrew

> +
>      case MSR_AMD64_DR0_ADDRESS_MASK:
>      case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
>          if ( !cp->extd.dbext || val != (uint32_t)val )




* Re: [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-08-20 17:08   ` Andrew Cooper
@ 2020-08-21 11:52     ` Roger Pau Monné
  2020-08-21 14:03       ` Andrew Cooper
  0 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monné @ 2020-08-21 11:52 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Jan Beulich, Wei Liu

On Thu, Aug 20, 2020 at 06:08:53PM +0100, Andrew Cooper wrote:
> On 20/08/2020 16:08, Roger Pau Monne wrote:
> 
> > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > index ca4307e19f..a890cb9976 100644
> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -274,6 +274,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >          *val = msrs->tsc_aux;
> >          break;
> >  
> > +    case MSR_AMD64_DE_CFG:
> > +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
> > +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD |
> > +                                           X86_VENDOR_HYGON)) ||
> > +             rdmsr_safe(MSR_AMD64_DE_CFG, *val) )
> > +            goto gp_fault;
> > +        break;
> 
> Ah.  What I intended was to read just bit 2 and nothing else.
> 
> Leaking the full value is non-ideal from a migration point of view, and
> in this case, you can avoid querying hardware entirely.
> 
> Just return AMD64_DE_CFG_LFENCE_SERIALISE here.  The only case where it
> won't be true is when the hypervisor running us (i.e. Xen) failed to set
> it up, and the CPU boot path failed to adjust it, at which point the
> whole system has much bigger problems.

Right, the rest are just model specific workarounds AFAICT, so it's
safe to not display them. A guest might attempt to set them, but we
should simply drop the write, see below.

> 
> > +
> >      case MSR_AMD64_DR0_ADDRESS_MASK:
> >      case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
> >          if ( !cp->extd.dbext )
> > @@ -499,6 +507,12 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> >              wrmsr_tsc_aux(val);
> >          break;
> >  
> > +    case MSR_AMD64_DE_CFG:
> > +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
> > +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> > +            goto gp_fault;
> > +        break;
> 
> There should be no problem yielding #GP here (i.e. dropping this hunk).
> 
> IIRC, it was the behaviour of certain hypervisors when Spectre hit, so
> all guests ought to cope.  (And indeed, not try to redundantly set the
> bit to start with).

It seems like OpenBSD will try to do so unconditionally, see:

https://www.illumos.org/issues/12998

According to the report there, returning #GP when trying to WRMSR
DE_CFG will cause OpenBSD to panic, so I think we need to keep this
behavior of silently dropping writes.

Thanks, Roger.



* Re: [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-08-21 11:52     ` Roger Pau Monné
@ 2020-08-21 14:03       ` Andrew Cooper
  2020-08-21 14:09         ` Roger Pau Monné
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Cooper @ 2020-08-21 14:03 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Jan Beulich, Wei Liu

On 21/08/2020 12:52, Roger Pau Monné wrote:
> On Thu, Aug 20, 2020 at 06:08:53PM +0100, Andrew Cooper wrote:
>> On 20/08/2020 16:08, Roger Pau Monne wrote:
>>
>>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
>>> index ca4307e19f..a890cb9976 100644
>>> --- a/xen/arch/x86/msr.c
>>> +++ b/xen/arch/x86/msr.c
>>> @@ -274,6 +274,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>>          *val = msrs->tsc_aux;
>>>          break;
>>>  
>>> +    case MSR_AMD64_DE_CFG:
>>> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
>>> +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD |
>>> +                                           X86_VENDOR_HYGON)) ||
>>> +             rdmsr_safe(MSR_AMD64_DE_CFG, *val) )
>>> +            goto gp_fault;
>>> +        break;
>> Ah.  What I intended was to read just bit 2 and nothing else.
>>
>> Leaking the full value is non-ideal from a migration point of view, and
>> in this case, you can avoid querying hardware entirely.
>>
>> Just return AMD64_DE_CFG_LFENCE_SERIALISE here.  The only case where it
>> won't be true is when the hypervisor running us (i.e. Xen) failed to set
>> it up, and the CPU boot path failed to adjust it, at which point the
>> whole system has much bigger problems.
> Right, the rest are just model specific workarounds AFAICT, so it's
> safe to not display them. A guest might attempt to set them, but we
> should simply drop the write, see below.

Most of the layout is model specific.  It's only by chance that the
LFENCE bits line up in all generations.

The bit used to work around Speculative Store Bypass in LS_CFG doesn't
line up across generations.

>>> +
>>>      case MSR_AMD64_DR0_ADDRESS_MASK:
>>>      case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
>>>          if ( !cp->extd.dbext )
>>> @@ -499,6 +507,12 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>              wrmsr_tsc_aux(val);
>>>          break;
>>>  
>>> +    case MSR_AMD64_DE_CFG:
>>> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
>>> +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
>>> +            goto gp_fault;
>>> +        break;
>> There should be no problem yielding #GP here (i.e. dropping this hunk).
>>
>> IIRC, it was the behaviour of certain hypervisors when Spectre hit, so
>> all guests ought to cope.  (And indeed, not try to redundantly set the
>> bit to start with).
> It seems like OpenBSD will try to do so unconditionally, see:
>
> https://www.illumos.org/issues/12998
>
> According to the report there returning #GP when trying to WRMSR
> DE_CFG will cause OpenBSD to panic, so I think we need to keep this
> behavior of silently dropping writes.

/sigh - there is always one.  Comment please, and let's leave it as
write-discard.

As for the vendor-ness, drop the checks to just cp->x86_vendor.  There
is no boot_cpu_data interaction now that you've taken the rdmsr() out.

~Andrew



* Re: [PATCH v2 3/8] x86/msr: explicitly handle AMD DE_CFG
  2020-08-21 14:03       ` Andrew Cooper
@ 2020-08-21 14:09         ` Roger Pau Monné
  0 siblings, 0 replies; 26+ messages in thread
From: Roger Pau Monné @ 2020-08-21 14:09 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Jan Beulich, Wei Liu

On Fri, Aug 21, 2020 at 03:03:08PM +0100, Andrew Cooper wrote:
> On 21/08/2020 12:52, Roger Pau Monné wrote:
> > On Thu, Aug 20, 2020 at 06:08:53PM +0100, Andrew Cooper wrote:
> >> On 20/08/2020 16:08, Roger Pau Monne wrote:
> >>
> >>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> >>> index ca4307e19f..a890cb9976 100644
> >>> --- a/xen/arch/x86/msr.c
> >>> +++ b/xen/arch/x86/msr.c
> >>> @@ -274,6 +274,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >>>          *val = msrs->tsc_aux;
> >>>          break;
> >>>  
> >>> +    case MSR_AMD64_DE_CFG:
> >>> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
> >>> +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD |
> >>> +                                           X86_VENDOR_HYGON)) ||
> >>> +             rdmsr_safe(MSR_AMD64_DE_CFG, *val) )
> >>> +            goto gp_fault;
> >>> +        break;
> >> Ah.  What I intended was to read just bit 2 and nothing else.
> >>
> >> Leaking the full value is non-ideal from a migration point of view, and
> >> in this case, you can avoid querying hardware entirely.
> >>
> >> Just return AMD64_DE_CFG_LFENCE_SERIALISE here.  The only case where it
> >> won't be true is when the hypervisor running us (i.e. Xen) failed to set
> >> it up, and the CPU boot path failed to adjust it, at which point the
> >> whole system has much bigger problems.
> > Right, the rest are just model specific workarounds AFAICT, so it's
> > safe to not display them. A guest might attempt to set them, but we
> > should simply drop the write, see below.
> 
> Most of the layout is model specific.  It's only by chance that the
> LFENCE bits line up in all generations.
> 
> The bit used to work around Speculative Store Bypass in LS_CFG doesn't
> line up across generations.
> 
> >>> +
> >>>      case MSR_AMD64_DR0_ADDRESS_MASK:
> >>>      case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
> >>>          if ( !cp->extd.dbext )
> >>> @@ -499,6 +507,12 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> >>>              wrmsr_tsc_aux(val);
> >>>          break;
> >>>  
> >>> +    case MSR_AMD64_DE_CFG:
> >>> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
> >>> +             !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> >>> +            goto gp_fault;
> >>> +        break;
> >> There should be no problem yielding #GP here (i.e. dropping this hunk).
> >>
> >> IIRC, it was the behaviour of certain hypervisors when Spectre hit, so
> >> all guests ought to cope.  (And indeed, not try to redundantly set the
> >> bit to start with).
> > It seems like OpenBSD will try to do so unconditionally, see:
> >
> > https://www.illumos.org/issues/12998
> >
> > According to the report there returning #GP when trying to WRMSR
> > DE_CFG will cause OpenBSD to panic, so I think we need to keep this
> > behavior of silently dropping writes.
> 
> /sigh - there is always one.  Comment please, and lets leave it as
> write-discard.
> 
> As for the vendor-ness, drop the checks to just cp->x86_vendor.  There
> is no boot_cpu_data interaction how that you've taken the rdmsr() out.

Sure, will wait for comments on other patches before sending the
updated version.

Thanks, Roger.
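
Putting the feedback from this subthread together, the DE_CFG handling
in a v3 could end up looking roughly like the sketch below (illustrative
only, not necessarily the code that gets committed; constant names taken
from the discussion above):

    /* guest_rdmsr() */
    case MSR_AMD64_DE_CFG:
        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
            goto gp_fault;
        /* Expose only the LFENCE-serialising bit; the rest is model specific. */
        *val = AMD64_DE_CFG_LFENCE_SERIALISE;
        break;

    /* guest_wrmsr() */
    case MSR_AMD64_DE_CFG:
        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
            goto gp_fault;
        /*
         * OpenBSD sets the LFENCE dispatch bit unconditionally and
         * panics on #GP, so silently drop the write instead of faulting.
         */
        break;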



* Re: [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-08-20 15:08 ` [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs Roger Pau Monne
@ 2020-08-27 15:03   ` Jan Beulich
  2020-08-31 14:37     ` Roger Pau Monné
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2020-08-27 15:03 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Andrew Cooper, Wei Liu

On 20.08.2020 17:08, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1917,6 +1917,21 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>              goto gpf;
>          break;
>  
> +    case MSR_K8_TOP_MEM1:
> +    case MSR_K8_TOP_MEM2:
> +        *msr_content = 0;
> +        break;

Any reason you don't fold this with ...

> +    case MSR_K8_SYSCFG:
> +        /*
> +         * Return MtrrFixDramEn: albeit the current emulated MTRR
> +         * implementation doesn't support the Extended Type-Field Format having
> +         * such bit set is common on AMD hardware and is harmless as long as
> +         * MtrrFixDramModEn isn't set.
> +         */
> +        *msr_content = K8_MTRRFIXRANGE_DRAM_ENABLE;
> +        break;
> +
>      case MSR_K8_VM_CR:
>          *msr_content = 0;
>          break;

... this existing case, and ...

> @@ -2094,6 +2109,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>              goto gpf;
>          break;
>  
> +    case MSR_K8_TOP_MEM1:
> +    case MSR_K8_TOP_MEM2:
> +    case MSR_K8_SYSCFG:
> +        /* Drop writes. */
> +        break;
> +
>      case MSR_K8_VM_CR:
>          /* ignore write. handle all bits as read-only. */
>          break;

... similarly these?

Jan
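
For illustration, the folding being suggested here could look roughly
like the sketch below (reusing the existing case labels; not the
eventual patch):

    /* svm_msr_read_intercept(): MSRs that simply read back as zero. */
    case MSR_K8_TOP_MEM1:
    case MSR_K8_TOP_MEM2:
    case MSR_K8_VM_CR:
        *msr_content = 0;
        break;

    /* svm_msr_write_intercept(): MSRs whose writes are dropped. */
    case MSR_K8_TOP_MEM1:
    case MSR_K8_TOP_MEM2:
    case MSR_K8_SYSCFG:
    case MSR_K8_VM_CR:
        /* Drop writes. */
        break;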



* Re: [PATCH v2 4/8] x86/svm: drop writes to BU_CFG on revF chips
  2020-08-20 15:08 ` [PATCH v2 4/8] x86/svm: drop writes to BU_CFG on revF chips Roger Pau Monne
@ 2020-08-27 15:42   ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2020-08-27 15:42 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Andrew Cooper, Wei Liu

On 20.08.2020 17:08, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2125,6 +2125,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          nsvm->ns_msr_hsavepa = msr_content;
>          break;
>  
> +    case MSR_F10_BU_CFG:
> +        /* See comment in svm_msr_read_intercept. */
> +        if ( boot_cpu_data.x86 != 0xf )
> +            goto gpf;
> +        break;

This isn't how I understand the code and comment there: The
dropping of writes applies to all families (and this being a
Fam10 MSR, in particular Fam10). What the code there does is
cover for the #GP that Xen received because the %rdi key
wasn't correct. There wouldn't have been a #GP on Fam10.

Newer families (didn't check yet where the boundary is) don't
support this MSR anymore as per the BKDG (looking at Fam15's
only right now). This may nevertheless still mean the CPUs
return zero on reads and discard writes; we would want to mimic
that behavior if so.

And then, whatever the final behavior here, I guess we'd want
to mirror it to the behavior for BU_CFG2, except perhaps for
the Fam0F special case.

Jan
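
One possible reading of this feedback, as a sketch only (whether
BU_CFG2 wants the same treatment, and for which families, was still
open at this point):

    case MSR_F10_BU_CFG:
        /*
         * Reads already special-case revF; mimic hardware by discarding
         * writes regardless of family rather than raising #GP.
         */
        break;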



* Re: [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-08-20 15:08 ` [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR Roger Pau Monne
@ 2020-08-27 15:53   ` Jan Beulich
  2020-08-31 15:12     ` Roger Pau Monné
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2020-08-27 15:53 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: xen-devel, Jun Nakajima, Kevin Tian, Andrew Cooper, Wei Liu

On 20.08.2020 17:08, Roger Pau Monne wrote:
> @@ -181,6 +182,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          /* Not offered to guests. */
>          goto gp_fault;
>  
> +    case MSR_IA32_FEATURE_CONTROL:
> +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
> +            goto gp_fault;

Can we really do it this way round, rather than raising #GP when
we know the MSR isn't there (AMD / Hygon)? I realized code e.g.
...

> +        *val = IA32_FEATURE_CONTROL_LOCK;
> +        if ( vmce_has_lmce(v) )
> +            *val |= IA32_FEATURE_CONTROL_LMCE_ON;
> +        if ( nestedhvm_enabled(d) )
> +            *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> +        break;
> +
> +
>      case MSR_IA32_PLATFORM_ID:
>          if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
>               !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )

... in context right here does it the same way, but I still
wonder whether we wouldn't better switch existing instances, too.

Jan
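
Concretely, the alternative being raised would invert the vendor test,
faulting only for vendors known not to implement the MSR; a sketch
using the same field names as the patch (not necessarily the final
form):

    case MSR_IA32_FEATURE_CONTROL:
        /* Fault only for vendors known not to have the MSR. */
        if ( cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
            goto gp_fault;
        /* ... populate *val as in the patch ... */
        break;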



* Re: [PATCH v2 6/8] x86/pv: disallow access to unknown MSRs
  2020-08-20 15:08 ` [PATCH v2 6/8] x86/pv: disallow access to unknown MSRs Roger Pau Monne
@ 2020-08-28  8:45   ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2020-08-28  8:45 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Andrew Cooper, Wei Liu

On 20.08.2020 17:08, Roger Pau Monne wrote:
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -972,9 +972,10 @@ static int read_msr(unsigned int reg, uint64_t *val,
>          }
>          /* fall through */
>      default:
> +        gdprintk(XENLOG_WARNING, "RDMSR 0x%08x unimplemented\n", reg);
> +        break;
> +
>      normal:
> -        /* Everyone can read the MSR space. */
> -        /* gdprintk(XENLOG_WARNING, "Domain attempted RDMSR %08x\n", reg); */
>          if ( rdmsr_safe(reg, *val) )
>              break;
>          return X86EMUL_OKAY;
> @@ -1141,14 +1142,15 @@ static int write_msr(unsigned int reg, uint64_t val,
>          }
>          /* fall through */
>      default:
> -        if ( rdmsr_safe(reg, temp) )
> -            break;
> +        gdprintk(XENLOG_WARNING,
> +                 "WRMSR 0x%08x val 0x%016"PRIx64" unimplemented\n",
> +                 reg, val);
> +        break;
>  
> -        if ( val != temp )
>      invalid:
> -            gdprintk(XENLOG_WARNING,
> -                     "Domain attempted WRMSR %08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
> -                     reg, temp, val);
> +        gdprintk(XENLOG_WARNING,
> +                 "Domain attempted WRMSR %08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
> +                 reg, temp, val);
>          return X86EMUL_OKAY;
>      }

There's some odd mix of whether or not 0x prefixes get logged. I'd
advocate for dropping all of them, but imo at the very least all
three messages should be consistent in this regard.

Jan
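
For instance, dropping the 0x prefixes throughout would leave the
messages looking like this (sketch only):

    gdprintk(XENLOG_WARNING, "RDMSR %08x unimplemented\n", reg);
    gdprintk(XENLOG_WARNING,
             "WRMSR %08x val %016"PRIx64" unimplemented\n", reg, val);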



* Re: [PATCH v2 7/8] x86/hvm: Disallow access to unknown MSRs
  2020-08-20 15:08 ` [PATCH v2 7/8] x86/hvm: Disallow " Roger Pau Monne
@ 2020-08-28  8:51   ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2020-08-28  8:51 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: xen-devel, Andrew Cooper, Wei Liu, Jun Nakajima, Kevin Tian

On 20.08.2020 17:08, Roger Pau Monne wrote:
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Change the catch-all behavior for MSRs not explicitly handled. Instead
> of allowing full read access to the MSR space and silently dropping
> writes, raise an exception when the MSR is not explicitly handled.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> [remove rdmsr_safe from default case in svm_msr_read_intercept]
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

While within this patch use of 0x in log messages is consistent,
ultimately this one wants to follow whatever gets done in patch 6.

Jan



* Re: [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd,wr}msr()
  2020-08-20 15:08 ` [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd, wr}msr() Roger Pau Monne
@ 2020-08-28  8:55   ` Jan Beulich
  2020-08-31 15:22     ` Roger Pau Monné
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2020-08-28  8:55 UTC (permalink / raw)
  To: Roger Pau Monne, Andrew Cooper; +Cc: xen-devel, Wei Liu

On 20.08.2020 17:08, Roger Pau Monne wrote:
> Now that the main PV/HVM MSR handlers raise #GP for all unknown MSRs, there is
> no need to special case these MSRs any more.

I agree, yet I'd like to point out (based on past backporting of
security fixes) that this change may end up being a good source
of backporting mistakes, as 4.14 may need chunks added here which
on master / staging have no place anymore.

Jan



* Re: [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-08-27 15:03   ` Jan Beulich
@ 2020-08-31 14:37     ` Roger Pau Monné
  2020-08-31 14:45       ` Roger Pau Monné
  2020-08-31 15:20       ` Jan Beulich
  0 siblings, 2 replies; 26+ messages in thread
From: Roger Pau Monné @ 2020-08-31 14:37 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu

On Thu, Aug 27, 2020 at 05:03:50PM +0200, Jan Beulich wrote:
> On 20.08.2020 17:08, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -1917,6 +1917,21 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >              goto gpf;
> >          break;
> >  
> > +    case MSR_K8_TOP_MEM1:
> > +    case MSR_K8_TOP_MEM2:
> > +        *msr_content = 0;
> > +        break;
> 
> Any reason you don't fold this with ...
> 
> > +    case MSR_K8_SYSCFG:
> > +        /*
> > +         * Return MtrrFixDramEn: albeit the current emulated MTRR
> > +         * implementation doesn't support the Extended Type-Field Format having
> > +         * such bit set is common on AMD hardware and is harmless as long as
> > +         * MtrrFixDramModEn isn't set.
> > +         */
> > +        *msr_content = K8_MTRRFIXRANGE_DRAM_ENABLE;
> > +        break;
> > +
> >      case MSR_K8_VM_CR:
> >          *msr_content = 0;
> >          break;
> 
> ... this existing case, and ...

I was trying to sort them by value, but I can certainly merge this and
the case below.

Thanks, Roger.



* Re: [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-08-31 14:37     ` Roger Pau Monné
@ 2020-08-31 14:45       ` Roger Pau Monné
  2020-08-31 15:21         ` Jan Beulich
  2020-08-31 15:20       ` Jan Beulich
  1 sibling, 1 reply; 26+ messages in thread
From: Roger Pau Monné @ 2020-08-31 14:45 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu

On Mon, Aug 31, 2020 at 04:37:47PM +0200, Roger Pau Monné wrote:
> On Thu, Aug 27, 2020 at 05:03:50PM +0200, Jan Beulich wrote:
> > On 20.08.2020 17:08, Roger Pau Monne wrote:
> > > --- a/xen/arch/x86/hvm/svm/svm.c
> > > +++ b/xen/arch/x86/hvm/svm/svm.c
> > > @@ -1917,6 +1917,21 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> > >              goto gpf;
> > >          break;
> > >  
> > > +    case MSR_K8_TOP_MEM1:
> > > +    case MSR_K8_TOP_MEM2:
> > > +        *msr_content = 0;
> > > +        break;
> > 
> > Any reason you don't fold this with ...
> > 
> > > +    case MSR_K8_SYSCFG:
> > > +        /*
> > > +         * Return MtrrFixDramEn: albeit the current emulated MTRR
> > > +         * implementation doesn't support the Extended Type-Field Format having
> > > +         * such bit set is common on AMD hardware and is harmless as long as
> > > +         * MtrrFixDramModEn isn't set.
> > > +         */
> > > +        *msr_content = K8_MTRRFIXRANGE_DRAM_ENABLE;

On the previous version you commented that returning 0 here would be
more correct; do you still think so?

I agree it seems better to not report any of those AMD-specific MTRR
features, since we don't implement them in our emulated MTRR code.

Thanks, Roger.



* Re: [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-08-27 15:53   ` Jan Beulich
@ 2020-08-31 15:12     ` Roger Pau Monné
  2020-08-31 15:25       ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monné @ 2020-08-31 15:12 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Jun Nakajima, Kevin Tian, Andrew Cooper, Wei Liu

On Thu, Aug 27, 2020 at 05:53:16PM +0200, Jan Beulich wrote:
> On 20.08.2020 17:08, Roger Pau Monne wrote:
> > @@ -181,6 +182,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >          /* Not offered to guests. */
> >          goto gp_fault;
> >  
> > +    case MSR_IA32_FEATURE_CONTROL:
> > +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
> > +            goto gp_fault;
> 
> Can we really do it this way round, rather than raising #GP when
> we know the MSR isn't there (AMD / Hygon)? I realized code e.g.
> ...
> 
> > +        *val = IA32_FEATURE_CONTROL_LOCK;
> > +        if ( vmce_has_lmce(v) )
> > +            *val |= IA32_FEATURE_CONTROL_LMCE_ON;
> > +        if ( nestedhvm_enabled(d) )
> > +            *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> > +        break;
> > +
> > +
> >      case MSR_IA32_PLATFORM_ID:
> >          if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
> >               !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )
> 
> ... in context right here does it the same way, but I still
> wonder whether we wouldn't better switch existing instances, too.

Hm, no idea really. Right now it seems better to check for != Intel
rather than AMD | Hygon | Centaur | Shanghai, as that's a MSR specific
to Intel.

Do those MSRs exist in Centaur / Shanghai?

Thanks, Roger.



* Re: [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-08-31 14:37     ` Roger Pau Monné
  2020-08-31 14:45       ` Roger Pau Monné
@ 2020-08-31 15:20       ` Jan Beulich
  1 sibling, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2020-08-31 15:20 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Wei Liu

On 31.08.2020 16:37, Roger Pau Monné wrote:
> On Thu, Aug 27, 2020 at 05:03:50PM +0200, Jan Beulich wrote:
>> On 20.08.2020 17:08, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -1917,6 +1917,21 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>>              goto gpf;
>>>          break;
>>>  
>>> +    case MSR_K8_TOP_MEM1:
>>> +    case MSR_K8_TOP_MEM2:
>>> +        *msr_content = 0;
>>> +        break;
>>
>> Any reason you don't fold this with ...
>>
>>> +    case MSR_K8_SYSCFG:
>>> +        /*
>>> +         * Return MtrrFixDramEn: albeit the current emulated MTRR
>>> +         * implementation doesn't support the Extended Type-Field Format having
>>> +         * such bit set is common on AMD hardware and is harmless as long as
>>> +         * MtrrFixDramModEn isn't set.
>>> +         */
>>> +        *msr_content = K8_MTRRFIXRANGE_DRAM_ENABLE;
>>> +        break;
>>> +
>>>      case MSR_K8_VM_CR:
>>>          *msr_content = 0;
>>>          break;
>>
>> ... this existing case, and ...
> 
> I was trying to sort them by value, but I can certainly merge this and
> the case below.

Sorting by number is helpful as a secondary criterion, but I think groups
of registers wanting to be handled the same should go together. This is
especially looking forward, where otherwise many instances of the same
(trivial or not) logic may appear.

Jan



* Re: [PATCH v2 2/8] x86/svm: silently drop writes to SYSCFG and related MSRs
  2020-08-31 14:45       ` Roger Pau Monné
@ 2020-08-31 15:21         ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2020-08-31 15:21 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Wei Liu

On 31.08.2020 16:45, Roger Pau Monné wrote:
> On Mon, Aug 31, 2020 at 04:37:47PM +0200, Roger Pau Monné wrote:
>> On Thu, Aug 27, 2020 at 05:03:50PM +0200, Jan Beulich wrote:
>>> On 20.08.2020 17:08, Roger Pau Monne wrote:
>>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>>> @@ -1917,6 +1917,21 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>>>              goto gpf;
>>>>          break;
>>>>  
>>>> +    case MSR_K8_TOP_MEM1:
>>>> +    case MSR_K8_TOP_MEM2:
>>>> +        *msr_content = 0;
>>>> +        break;
>>>
>>> Any reason you don't fold this with ...
>>>
>>>> +    case MSR_K8_SYSCFG:
>>>> +        /*
>>>> +         * Return MtrrFixDramEn: albeit the current emulated MTRR
>>>> +         * implementation doesn't support the Extended Type-Field Format having
>>>> +         * such bit set is common on AMD hardware and is harmless as long as
>>>> +         * MtrrFixDramModEn isn't set.
>>>> +         */
>>>> +        *msr_content = K8_MTRRFIXRANGE_DRAM_ENABLE;
> 
> On the previous version you commented that returning 0 here would be
> more correct, do you still think so?

I do, but I'm still hoping either to get Andrew to agree (iirc it was
him who suggested the value above), or to understand why he wants it
this way.

Jan



* Re: [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd,wr}msr()
  2020-08-28  8:55   ` [PATCH v2 8/8] x86/msr: Drop compatibility #GP handling in guest_{rd,wr}msr() Jan Beulich
@ 2020-08-31 15:22     ` Roger Pau Monné
  0 siblings, 0 replies; 26+ messages in thread
From: Roger Pau Monné @ 2020-08-31 15:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, xen-devel, Wei Liu

On Fri, Aug 28, 2020 at 10:55:29AM +0200, Jan Beulich wrote:
> On 20.08.2020 17:08, Roger Pau Monne wrote:
> > Now that the main PV/HVM MSR handlers raise #GP for all unknown MSRs, there is
> > no need to special case these MSRs any more.
> 
> I agree, yet I'd like to point out (based on past backporting of
> security fixes) that this change may end up being a good source
> of backporting mistakes, as 4.14 may need chunks added here which
> on master / staging have no place anymore.

Even if we leave this chunk out, patches for > 4.14 won't need to add
anything to make an MSR return a #GP, so it's likely that the chunk
will have to be added for backports anyway.

I really have no idea how to help with backporting with such a change
in place.

Thanks, Roger.



* Re: [PATCH v2 5/8] x86/pv: allow reading FEATURE_CONTROL MSR
  2020-08-31 15:12     ` Roger Pau Monné
@ 2020-08-31 15:25       ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2020-08-31 15:25 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Jun Nakajima, Kevin Tian, Andrew Cooper, Wei Liu

On 31.08.2020 17:12, Roger Pau Monné wrote:
> On Thu, Aug 27, 2020 at 05:53:16PM +0200, Jan Beulich wrote:
>> On 20.08.2020 17:08, Roger Pau Monne wrote:
>>> @@ -181,6 +182,18 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>>          /* Not offered to guests. */
>>>          goto gp_fault;
>>>  
>>> +    case MSR_IA32_FEATURE_CONTROL:
>>> +        if ( !(cp->x86_vendor & X86_VENDOR_INTEL) )
>>> +            goto gp_fault;
>>
>> Can we really do it this way round, rather than raising #GP when
>> we know the MSR isn't there (AMD / Hygon)? I realized code e.g.
>> ...
>>
>>> +        *val = IA32_FEATURE_CONTROL_LOCK;
>>> +        if ( vmce_has_lmce(v) )
>>> +            *val |= IA32_FEATURE_CONTROL_LMCE_ON;
>>> +        if ( nestedhvm_enabled(d) )
>>> +            *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
>>> +        break;
>>> +
>>> +
>>>      case MSR_IA32_PLATFORM_ID:
>>>          if ( !(cp->x86_vendor & X86_VENDOR_INTEL) ||
>>>               !(boot_cpu_data.x86_vendor & X86_VENDOR_INTEL) )
>>
>> ... in context right here does it the same way, but I still
>> wonder whether we wouldn't better switch existing instances, too.
> 
> Hm, no idea really. Right now it seems better to check for != Intel
> rather than AMD | Hygon | Centaur | Shanghai, as that's a MSR specific
> to Intel.
> 
> Do those MSRs exist in Centaur / Shanghai?

I can't tell for sure, but I suppose they do, as they're (in a way)
cloning Intel's implementation. My thinking here is that by not
exposing an MSR when we should, we potentially cause guests to crash.
Whereas by exposing an MSR that ought not to be there, fallout would
result only if the vendor actually has something else at that index.
And I'd hope vendors apply some care to avoid doing so.

Jan


