* [PATCH 0/6] x86/msr: Introductory MSR cleanup
@ 2018-06-26 13:18 Andrew Cooper
  2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
                   ` (6 more replies)
  0 siblings, 7 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

This series is a combination of fragments already posted, and cleanup work I
did while travelling in/around XenSummit.

There is no real functional change, but there is a substantial reduction in
code volume, along with consistency improvements.  This is the start of a
very large amount of MSR improvement work.

Andrew Cooper (6):
  x86/msr: Clean up the MSR_EFER constants
  x86/msr: Cleanup of misc constants
  x86/msr: Clean up the MSR_{PLATFORM_INFO,MISC_FEATURES_ENABLES} constants
  x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  x86/msr: Clean up the MSR_APIC_BASE constants
  x86/msr: Clean up the x2APIC MSR constants

 xen/arch/x86/apic.c                    |  66 +++++++-------
 xen/arch/x86/boot/trampoline.S         |   2 +-
 xen/arch/x86/boot/wakeup.S             |   7 +-
 xen/arch/x86/cpu/common.c              |  16 ++--
 xen/arch/x86/cpu/intel.c               |   2 +-
 xen/arch/x86/cpu/mcheck/mce_intel.c    |   7 +-
 xen/arch/x86/cpu/mcheck/vmce.c         |   2 +-
 xen/arch/x86/cpu/mwait-idle.c          |   4 +-
 xen/arch/x86/domctl.c                  |   4 +-
 xen/arch/x86/efi/efi-boot.h            |   2 +-
 xen/arch/x86/genapic/x2apic.c          |   4 +-
 xen/arch/x86/hvm/hvm.c                 |  16 ++--
 xen/arch/x86/hvm/vlapic.c              |  19 ++--
 xen/arch/x86/hvm/vmx/vmcs.c            |  31 +++----
 xen/arch/x86/hvm/vmx/vmx.c             |  33 ++++---
 xen/arch/x86/msr.c                     |  30 +++----
 xen/arch/x86/spec_ctrl.c               |   8 +-
 xen/arch/x86/x86_emulate/x86_emulate.c |   6 +-
 xen/include/asm-x86/cpufeature.h       |   1 +
 xen/include/asm-x86/hvm/hvm.h          |   2 +-
 xen/include/asm-x86/hvm/vlapic.h       |   6 +-
 xen/include/asm-x86/msr-index.h        | 156 +++++++++++++++++----------------
 xen/include/asm-x86/msr.h              |   4 +-
 23 files changed, 212 insertions(+), 216 deletions(-)

-- 
2.1.4



* [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
@ 2018-06-26 13:18 ` Andrew Cooper
  2018-06-26 15:33   ` Wei Liu
                     ` (2 more replies)
  2018-06-26 13:18 ` [PATCH 2/6] x86/msr: Cleanup of misc constants Andrew Cooper
                   ` (5 subsequent siblings)
  6 siblings, 3 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The bit position constants are only used by the trampoline asm, but the
code is shorter and clearer when using the mask constants.  This halves
the number of constants used.

Consistently use _AC() for the bit constants, and start to use spaces
for indentation.  Furthermore, EFER contains the NX-Enable bit, so rename
the constant to EFER_NXE to match both the AMD and Intel specs.
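
For reference (not part of this patch), the _AC() helper exists so that the
same constant is usable from both C and assembly; its definition in the
const.h infrastructure is along these lines:

    #ifdef __ASSEMBLY__
    #define _AC(X, Y)       X                  /* assembler: drop the suffix */
    #else
    #define __AC(X, Y)      (X ## Y)
    #define _AC(X, Y)       __AC(X, Y)         /* C: paste the suffix, e.g. 1ULL */
    #endif

so EFER_NXE expands to (1 << 11) for the boot assembly and (1ULL << 11) in C.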

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/boot/trampoline.S  |  2 +-
 xen/arch/x86/boot/wakeup.S      |  7 +++----
 xen/arch/x86/cpu/intel.c        |  2 +-
 xen/arch/x86/efi/efi-boot.h     |  2 +-
 xen/arch/x86/hvm/hvm.c          |  4 ++--
 xen/include/asm-x86/hvm/hvm.h   |  2 +-
 xen/include/asm-x86/msr-index.h | 33 ++++++++++++---------------------
 7 files changed, 21 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index 5588c79..7e77e61e 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -138,7 +138,7 @@ trampoline_protmode_entry:
         or      $EFER_LME|EFER_SCE,%eax   /* Long Mode + SYSCALL/SYSRET */
         bt      $cpufeat_bit(X86_FEATURE_NX),%edi /* No Execute? */
         jnc     1f
-        btsl    $_EFER_NX,%eax  /* No Execute     */
+        or      $EFER_NXE, %eax /* No Execute */
 1:      wrmsr
 
         mov     $(X86_CR0_PG | X86_CR0_AM | X86_CR0_WP | X86_CR0_NE |\
diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch/x86/boot/wakeup.S
index f9632ee..a37680b 100644
--- a/xen/arch/x86/boot/wakeup.S
+++ b/xen/arch/x86/boot/wakeup.S
@@ -144,11 +144,10 @@ wakeup_32:
         jz      .Lskip_eferw
         movl    $MSR_EFER,%ecx
         rdmsr
-        btsl    $_EFER_LME,%eax /* Long Mode      */
-        btsl    $_EFER_SCE,%eax /* SYSCALL/SYSRET */
-        btl     $20,%edi        /* No Execute?    */
+        or      $EFER_LME | EFER_SCE, %eax /* Long Mode + SYSCALL/SYSRET */
+        bt      $cpufeat_bit(X86_FEATURE_NX), %edi /* No Execute? */
         jnc     1f
-        btsl    $_EFER_NX,%eax  /* No Execute     */
+        or      $EFER_NXE, %eax  /* No Execute */
 1:      wrmsr
 .Lskip_eferw:
 
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 9477965..f3b04fe 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -247,7 +247,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 		printk(KERN_INFO "revised cpuid level: %d\n",
 		       cpuid_eax(0));
 	if (disable & MSR_IA32_MISC_ENABLE_XD_DISABLE) {
-		write_efer(read_efer() | EFER_NX);
+		write_efer(read_efer() | EFER_NXE);
 		printk(KERN_INFO
 		       "re-enabled NX (Execute Disable) protection\n");
 	}
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 5789d2c..09bb3f4 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -239,7 +239,7 @@ static void __init noreturn efi_arch_post_exit_boot(void)
     rdmsrl(MSR_EFER, efer);
     efer |= EFER_SCE;
     if ( cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX) )
-        efer |= EFER_NX;
+        efer |= EFER_NXE;
     wrmsrl(MSR_EFER, efer);
     write_cr0(X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP |
               X86_CR0_AM | X86_CR0_PG);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c23983c..4e247d0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -916,8 +916,8 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
     if ( (value & EFER_LMA) && (!(value & EFER_LME) || !cr0_pg) )
         return "LMA/LME/CR0.PG inconsistency";
 
-    if ( (value & EFER_NX) && !p->extd.nx )
-        return "NX without feature";
+    if ( (value & EFER_NXE) && !p->extd.nx )
+        return "NXE without feature";
 
     if ( (value & EFER_SVME) && (!p->extd.svm || !nestedhvm_enabled(d)) )
         return "SVME without nested virt";
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index ef5e198..c9511c3 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -299,7 +299,7 @@ int hvm_girq_dest_2_vcpu_id(struct domain *d, uint8_t dest, uint8_t dest_mode);
 /* HVM guests on Intel hardware leak Xen's NX settings into guest context. */
 #define hvm_nx_enabled(v) \
     ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && cpu_has_nx) ||    \
-     ((v)->arch.hvm_vcpu.guest_efer & EFER_NX))
+     ((v)->arch.hvm_vcpu.guest_efer & EFER_NXE))
 #define hvm_pku_enabled(v) \
     (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PKE))
 
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 8fbccc8..1b10f3e 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -4,7 +4,6 @@
 /* CPU model specific register (MSR) numbers */
 
 /* x86-64 specific MSRs */
-#define MSR_EFER		0xc0000080 /* extended feature register */
 #define MSR_STAR		0xc0000081 /* legacy mode SYSCALL target */
 #define MSR_LSTAR		0xc0000082 /* long mode SYSCALL target */
 #define MSR_CSTAR		0xc0000083 /* compat mode SYSCALL target */
@@ -14,26 +13,6 @@
 #define MSR_SHADOW_GS_BASE	0xc0000102 /* SwapGS GS shadow */
 #define MSR_TSC_AUX		0xc0000103 /* Auxiliary TSC */
 
-/* EFER bits: */
-#define _EFER_SCE		0  /* SYSCALL/SYSRET */
-#define _EFER_LME		8  /* Long mode enable */
-#define _EFER_LMA		10 /* Long mode active (read-only) */
-#define _EFER_NX		11 /* No execute enable */
-#define _EFER_SVME		12 /* AMD: SVM enable */
-#define _EFER_LMSLE		13 /* AMD: Long-mode segment limit enable */
-#define _EFER_FFXSE		14 /* AMD: Fast FXSAVE/FXRSTOR enable */
-
-#define EFER_SCE		(1<<_EFER_SCE)
-#define EFER_LME		(1<<_EFER_LME)
-#define EFER_LMA		(1<<_EFER_LMA)
-#define EFER_NX			(1<<_EFER_NX)
-#define EFER_SVME		(1<<_EFER_SVME)
-#define EFER_LMSLE		(1<<_EFER_LMSLE)
-#define EFER_FFXSE		(1<<_EFER_FFXSE)
-
-#define EFER_KNOWN_MASK		(EFER_SCE | EFER_LME | EFER_LMA | EFER_NX | \
-				 EFER_SVME | EFER_LMSLE | EFER_FFXSE)
-
 /* Speculation Controls. */
 #define MSR_SPEC_CTRL			0x00000048
 #define SPEC_CTRL_IBRS			(_AC(1, ULL) << 0)
@@ -49,6 +28,18 @@
 #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 
+#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
+#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
+#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
+#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
+#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
+#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
+#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
+#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
+
+#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
+                         EFER_SVME | EFER_LMSLE | EFER_FFXSE)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
-- 
2.1.4



* [PATCH 2/6] x86/msr: Cleanup of misc constants
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
  2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
@ 2018-06-26 13:18 ` Andrew Cooper
  2018-06-26 15:43   ` Wei Liu
  2018-06-27 10:48   ` Roger Pau Monné
  2018-06-26 13:18 ` [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants Andrew Cooper
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Begin the process of cleaning up msr-index.h.  Order the MSRs at the
head of the file by index, use spaces for indentation, _AC() for bit
positions, and add a comment describing the expected style.  Abbreviate
the ARCH_CAPS_* constants to reduce code volume.

Leave a trailing comment to logically split the file in two, between the
now-consistent head and the legacy tail, which will be improved moving
forwards.
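
Purely for illustration (hypothetical MSR and bit names, not part of this
patch), a definition following the documented style would look like:

    #define MSR_EXAMPLE                     0x00000123
    #define EXAMPLE_FEAT_A                  (_AC(1, ULL) <<  0)
    #define EXAMPLE_FEAT_B                  (_AC(1, ULL) <<  5)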

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/spec_ctrl.c        |  8 +++---
 xen/include/asm-x86/msr-index.h | 59 ++++++++++++++++++++++++++---------------
 2 files changed, 41 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 08e6784..4db2ae0 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -215,8 +215,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_SSBD))  ? " SSBD"      : "",
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB))  ? " IBPB"      : "",
-           (caps & ARCH_CAPABILITIES_IBRS_ALL)      ? " IBRS_ALL"  : "",
-           (caps & ARCH_CAPABILITIES_RDCL_NO)       ? " RDCL_NO"   : "",
+           (caps & ARCH_CAPS_IBRS_ALL)              ? " IBRS_ALL"  : "",
+           (caps & ARCH_CAPS_RDCL_NO)               ? " RDCL_NO"   : "",
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "");
 
@@ -429,11 +429,11 @@ static __init void xpti_init_default(bool force)
         return;
 
     if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
-        caps = ARCH_CAPABILITIES_RDCL_NO;
+        caps = ARCH_CAPS_RDCL_NO;
     else if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
         rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
-    if ( caps & ARCH_CAPABILITIES_RDCL_NO )
+    if ( caps & ARCH_CAPS_RDCL_NO )
         opt_xpti = 0;
     else
         opt_xpti = OPT_XPTI_DOM0 | OPT_XPTI_DOMU;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 1b10f3e..2c9b75f 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -1,32 +1,34 @@
 #ifndef __ASM_MSR_INDEX_H
 #define __ASM_MSR_INDEX_H
 
-/* CPU model specific register (MSR) numbers */
-
-/* x86-64 specific MSRs */
-#define MSR_STAR		0xc0000081 /* legacy mode SYSCALL target */
-#define MSR_LSTAR		0xc0000082 /* long mode SYSCALL target */
-#define MSR_CSTAR		0xc0000083 /* compat mode SYSCALL target */
-#define MSR_SYSCALL_MASK	0xc0000084 /* EFLAGS mask for syscall */
-#define MSR_FS_BASE		0xc0000100 /* 64bit FS base */
-#define MSR_GS_BASE		0xc0000101 /* 64bit GS base */
-#define MSR_SHADOW_GS_BASE	0xc0000102 /* SwapGS GS shadow */
-#define MSR_TSC_AUX		0xc0000103 /* Auxiliary TSC */
+/*
+ * CPU model specific register (MSR) numbers
+ *
+ * Definitions for an MSR should follow this style:
+ *
+ * #define MSR_$NAME                    0x$INDEX
+ * #define $NAME_$BIT1                  (_AC(1, ULL) << $POS1)
+ * #define $NAME_$BIT2                  (_AC(1, ULL) << $POS2)
+ *
+ * Blocks of related constants should be sorted by MSR index.  The constant
+ * names should be as concise as possible, and the bit names may have an
+ * abbreviated name.
+ */
 
 /* Speculation Controls. */
-#define MSR_SPEC_CTRL			0x00000048
-#define SPEC_CTRL_IBRS			(_AC(1, ULL) << 0)
-#define SPEC_CTRL_STIBP			(_AC(1, ULL) << 1)
-#define SPEC_CTRL_SSBD			(_AC(1, ULL) << 2)
+#define MSR_SPEC_CTRL                   0x00000048
+#define SPEC_CTRL_IBRS                  (_AC(1, ULL) <<  0)
+#define SPEC_CTRL_STIBP                 (_AC(1, ULL) <<  1)
+#define SPEC_CTRL_SSBD                  (_AC(1, ULL) <<  2)
 
-#define MSR_PRED_CMD			0x00000049
-#define PRED_CMD_IBPB			(_AC(1, ULL) << 0)
+#define MSR_PRED_CMD                    0x00000049
+#define PRED_CMD_IBPB                   (_AC(1, ULL) <<  0)
 
-#define MSR_ARCH_CAPABILITIES		0x0000010a
-#define ARCH_CAPABILITIES_RDCL_NO	(_AC(1, ULL) << 0)
-#define ARCH_CAPABILITIES_IBRS_ALL	(_AC(1, ULL) << 1)
-#define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
-#define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
+#define MSR_ARCH_CAPABILITIES           0x0000010a
+#define ARCH_CAPS_RDCL_NO               (_AC(1, ULL) <<  0)
+#define ARCH_CAPS_IBRS_ALL              (_AC(1, ULL) <<  1)
+#define ARCH_CAPS_RSBA                  (_AC(1, ULL) <<  2)
+#define ARCH_CAPS_SSB_NO                (_AC(1, ULL) <<  4)
 
 #define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
 #define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
@@ -40,6 +42,19 @@
 #define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
                          EFER_SVME | EFER_LMSLE | EFER_FFXSE)
 
+#define MSR_STAR                        0xc0000081 /* Legacy mode SYSCALL target */
+#define MSR_LSTAR                       0xc0000082 /* Long mode SYSCALL target */
+#define MSR_CSTAR                       0xc0000083 /* Compat mode SYSCALL target */
+#define MSR_SYSCALL_MASK                0xc0000084 /* EFLAGS mask for syscall */
+#define MSR_FS_BASE                     0xc0000100 /* 64bit FS base */
+#define MSR_GS_BASE                     0xc0000101 /* 64bit GS base */
+#define MSR_SHADOW_GS_BASE              0xc0000102 /* SwapGS GS shadow */
+#define MSR_TSC_AUX                     0xc0000103 /* Auxiliary TSC */
+
+/*
+ * Legacy MSR constants in need of cleanup.  No new code below this comment.
+ */
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
-- 
2.1.4



* [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
  2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
  2018-06-26 13:18 ` [PATCH 2/6] x86/msr: Cleanup of misc constants Andrew Cooper
@ 2018-06-26 13:18 ` Andrew Cooper
  2018-06-26 16:31   ` Wei Liu
                     ` (2 more replies)
  2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
                   ` (3 subsequent siblings)
  6 siblings, 3 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

These MSRs, while being Intel specific, are used to offer virtualised
CPUID faulting support on AMD hardware, so remove the INTEL infix.

The bit position constants are used by guest_rdmsr(), but the logic can be
expressed using MASK_INSR(), which allows the bit position constants to be
dropped.
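
For reference, the MASK_INSR()/MASK_EXTR() helpers insert/extract a value at
the position described by a mask, roughly:

    #define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
    #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))

so MASK_INSR(cpuid_faulting, PLATFORM_INFO_CPUID_FAULTING) places the boolean
straight into bit 31 without needing a separate bit position constant.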

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/common.c              | 16 +++++++---------
 xen/arch/x86/domctl.c                  |  4 ++--
 xen/arch/x86/hvm/hvm.c                 |  4 ++--
 xen/arch/x86/msr.c                     | 30 +++++++++++++++---------------
 xen/arch/x86/x86_emulate/x86_emulate.c |  6 +++---
 xen/include/asm-x86/msr-index.h        | 15 ++++++---------
 xen/include/asm-x86/msr.h              |  4 ++--
 7 files changed, 37 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 3548b12..a83077f 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -115,19 +115,17 @@ bool __init probe_cpuid_faulting(void)
 	uint64_t val;
 	int rc;
 
-	if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)
+	if ((rc = rdmsr_safe(MSR_PLATFORM_INFO, val)) == 0)
 	{
 		struct msr_domain_policy *dp = &raw_msr_domain_policy;
 
 		dp->plaform_info.available = true;
-		if (val & MSR_PLATFORM_INFO_CPUID_FAULTING)
+		if (val & PLATFORM_INFO_CPUID_FAULTING)
 			dp->plaform_info.cpuid_faulting = true;
 	}
 
-	if (rc ||
-	    !(val & MSR_PLATFORM_INFO_CPUID_FAULTING) ||
-	    rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES,
-		       this_cpu(msr_misc_features)))
+	if (rc || !(val & PLATFORM_INFO_CPUID_FAULTING) ||
+	    rdmsr_safe(MSR_MISC_FEATURES_ENABLES, this_cpu(msr_misc_features)))
 	{
 		setup_clear_cpu_cap(X86_FEATURE_CPUID_FAULTING);
 		return false;
@@ -145,12 +143,12 @@ static void set_cpuid_faulting(bool enable)
 	uint64_t *this_misc_features = &this_cpu(msr_misc_features);
 	uint64_t val = *this_misc_features;
 
-	if (!!(val & MSR_MISC_FEATURES_CPUID_FAULTING) == enable)
+	if ((val & MISC_FEATURES_CPUID_FAULTING) == enable)
 		return;
 
-	val ^= MSR_MISC_FEATURES_CPUID_FAULTING;
+	val ^= MISC_FEATURES_CPUID_FAULTING;
 
-	wrmsrl(MSR_INTEL_MISC_FEATURES_ENABLES, val);
+	wrmsrl(MSR_MISC_FEATURES_ENABLES, val);
 	*this_misc_features = val;
 }
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..6bbde04 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1248,7 +1248,7 @@ long arch_do_domctl(
         struct vcpu *v;
         static const uint32_t msrs_to_send[] = {
             MSR_SPEC_CTRL,
-            MSR_INTEL_MISC_FEATURES_ENABLES,
+            MSR_MISC_FEATURES_ENABLES,
         };
         uint32_t nr_msrs = ARRAY_SIZE(msrs_to_send);
 
@@ -1372,7 +1372,7 @@ long arch_do_domctl(
                 switch ( msr.index )
                 {
                 case MSR_SPEC_CTRL:
-                case MSR_INTEL_MISC_FEATURES_ENABLES:
+                case MSR_MISC_FEATURES_ENABLES:
                     if ( guest_wrmsr(v, msr.index, msr.value) != X86EMUL_OKAY )
                         break;
                     continue;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4e247d0..5823620 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1335,7 +1335,7 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 #define HVM_CPU_MSR_SIZE(cnt) offsetof(struct hvm_msr, msr[cnt])
 static const uint32_t msrs_to_send[] = {
     MSR_SPEC_CTRL,
-    MSR_INTEL_MISC_FEATURES_ENABLES,
+    MSR_MISC_FEATURES_ENABLES,
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
@@ -1471,7 +1471,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
             int rc;
 
         case MSR_SPEC_CTRL:
-        case MSR_INTEL_MISC_FEATURES_ENABLES:
+        case MSR_MISC_FEATURES_ENABLES:
             rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
 
             if ( rc != X86EMUL_OKAY )
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 1e12ccb..0162890 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -36,7 +36,7 @@ struct msr_vcpu_policy __read_mostly hvm_max_msr_vcpu_policy,
 
 static void __init calculate_raw_policy(void)
 {
-    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    /* 0x000000ce - MSR_PLATFORM_INFO */
     /* Was already added by probe_cpuid_faulting() */
 }
 
@@ -46,7 +46,7 @@ static void __init calculate_host_policy(void)
 
     *dp = raw_msr_domain_policy;
 
-    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    /* 0x000000ce - MSR_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
     dp->plaform_info.cpuid_faulting = cpu_has_cpuid_faulting;
 }
@@ -61,7 +61,7 @@ static void __init calculate_hvm_max_policy(void)
 
     *dp = host_msr_domain_policy;
 
-    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    /* 0x000000ce - MSR_PLATFORM_INFO */
     /* It's always possible to emulate CPUID faulting for HVM guests */
     if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL ||
          boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
@@ -70,7 +70,7 @@ static void __init calculate_hvm_max_policy(void)
         dp->plaform_info.cpuid_faulting = true;
     }
 
-    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
+    /* 0x00000140 - MSR_MISC_FEATURES_ENABLES */
     vp->misc_features_enables.available = dp->plaform_info.cpuid_faulting;
 }
 
@@ -81,7 +81,7 @@ static void __init calculate_pv_max_policy(void)
 
     *dp = host_msr_domain_policy;
 
-    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
+    /* 0x00000140 - MSR_MISC_FEATURES_ENABLES */
     vp->misc_features_enables.available = dp->plaform_info.cpuid_faulting;
 }
 
@@ -159,22 +159,22 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = vp->spec_ctrl.raw;
         break;
 
-    case MSR_INTEL_PLATFORM_INFO:
+    case MSR_PLATFORM_INFO:
         if ( !dp->plaform_info.available )
             goto gp_fault;
-        *val = (uint64_t)dp->plaform_info.cpuid_faulting <<
-               _MSR_PLATFORM_INFO_CPUID_FAULTING;
+        *val = MASK_INSR(dp->plaform_info.cpuid_faulting,
+                         PLATFORM_INFO_CPUID_FAULTING);
         break;
 
     case MSR_ARCH_CAPABILITIES:
         /* Not implemented yet. */
         goto gp_fault;
 
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
+    case MSR_MISC_FEATURES_ENABLES:
         if ( !vp->misc_features_enables.available )
             goto gp_fault;
-        *val = (uint64_t)vp->misc_features_enables.cpuid_faulting <<
-               _MSR_MISC_FEATURES_CPUID_FAULTING;
+        *val = MASK_INSR(vp->misc_features_enables.cpuid_faulting,
+                         MISC_FEATURES_CPUID_FAULTING);
         break;
 
     default:
@@ -199,7 +199,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     {
         uint64_t rsvd;
 
-    case MSR_INTEL_PLATFORM_INFO:
+    case MSR_PLATFORM_INFO:
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
         goto gp_fault;
@@ -254,7 +254,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             wrmsrl(MSR_PRED_CMD, val);
         break;
 
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
+    case MSR_MISC_FEATURES_ENABLES:
     {
         bool old_cpuid_faulting = vp->misc_features_enables.cpuid_faulting;
 
@@ -263,13 +263,13 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
         rsvd = ~0ull;
         if ( dp->plaform_info.cpuid_faulting )
-            rsvd &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
+            rsvd &= ~MISC_FEATURES_CPUID_FAULTING;
 
         if ( val & rsvd )
             goto gp_fault;
 
         vp->misc_features_enables.cpuid_faulting =
-            val & MSR_MISC_FEATURES_CPUID_FAULTING;
+            val & MISC_FEATURES_CPUID_FAULTING;
 
         if ( v == curr && is_hvm_domain(d) && cpu_has_cpuid_faulting &&
              (old_cpuid_faulting ^ vp->misc_features_enables.cpuid_faulting) )
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index e372c4b..1a84c90 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6525,9 +6525,9 @@ x86_emulate(
         msr_val = 0;
         fail_if(ops->cpuid == NULL);
 
-        /* Speculatively read MSR_INTEL_MISC_FEATURES_ENABLES. */
+        /* Speculatively read MSR_MISC_FEATURES_ENABLES. */
         if ( ops->read_msr && !mode_ring0() &&
-             (rc = ops->read_msr(MSR_INTEL_MISC_FEATURES_ENABLES,
+             (rc = ops->read_msr(MSR_MISC_FEATURES_ENABLES,
                                  &msr_val, ctxt)) == X86EMUL_EXCEPTION )
         {
             /* Not implemented.  Squash the exception and proceed normally. */
@@ -6537,7 +6537,7 @@ x86_emulate(
         if ( rc != X86EMUL_OKAY )
             goto done;
 
-        generate_exception_if((msr_val & MSR_MISC_FEATURES_CPUID_FAULTING),
+        generate_exception_if(msr_val & MISC_FEATURES_CPUID_FAULTING,
                               EXC_GP, 0); /* Faulting active? (Inc. CPL test) */
 
         rc = ops->cpuid(_regs.eax, _regs.ecx, &cpuid_leaf, ctxt);
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 2c9b75f..48d80e9 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -24,12 +24,18 @@
 #define MSR_PRED_CMD                    0x00000049
 #define PRED_CMD_IBPB                   (_AC(1, ULL) <<  0)
 
+#define MSR_PLATFORM_INFO               0x000000ce
+#define PLATFORM_INFO_CPUID_FAULTING    (_AC(1, ULL) << 31)
+
 #define MSR_ARCH_CAPABILITIES           0x0000010a
 #define ARCH_CAPS_RDCL_NO               (_AC(1, ULL) <<  0)
 #define ARCH_CAPS_IBRS_ALL              (_AC(1, ULL) <<  1)
 #define ARCH_CAPS_RSBA                  (_AC(1, ULL) <<  2)
 #define ARCH_CAPS_SSB_NO                (_AC(1, ULL) <<  4)
 
+#define MSR_MISC_FEATURES_ENABLES       0x00000140
+#define MISC_FEATURES_CPUID_FAULTING    (_AC(1, ULL) <<  0)
+
 #define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
 #define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
 #define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
@@ -534,15 +540,6 @@
 #define MSR_INTEL_MASK_V3_CPUID80000001 0x00000133
 #define MSR_INTEL_MASK_V3_CPUIDD_01     0x00000134
 
-/* Intel cpuid faulting MSRs */
-#define MSR_INTEL_PLATFORM_INFO		0x000000ce
-#define _MSR_PLATFORM_INFO_CPUID_FAULTING	31
-#define MSR_PLATFORM_INFO_CPUID_FAULTING	(1ULL << _MSR_PLATFORM_INFO_CPUID_FAULTING)
-
-#define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
-#define _MSR_MISC_FEATURES_CPUID_FAULTING	0
-#define MSR_MISC_FEATURES_CPUID_FAULTING	(1ULL << _MSR_MISC_FEATURES_CPUID_FAULTING)
-
 #define MSR_CC6_DEMOTION_POLICY_CONFIG	0x00000668
 #define MSR_MC6_DEMOTION_POLICY_CONFIG	0x00000669
 
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index afbeb7f..8f9f964 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -260,7 +260,7 @@ static inline void wrmsr_tsc_aux(uint32_t val)
 /* MSR policy object for shared per-domain MSRs */
 struct msr_domain_policy
 {
-    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    /* 0x000000ce - MSR_PLATFORM_INFO */
     struct {
         bool available; /* This MSR is non-architectural */
         bool cpuid_faulting;
@@ -288,7 +288,7 @@ struct msr_vcpu_policy
         uint32_t raw;
     } spec_ctrl;
 
-    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
+    /* 0x00000140 - MSR_MISC_FEATURES_ENABLES */
     struct {
         bool available; /* This MSR is non-architectural */
         bool cpuid_faulting;
-- 
2.1.4



* [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-06-26 13:18 ` [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants Andrew Cooper
@ 2018-06-26 13:18 ` Andrew Cooper
  2018-06-26 17:59   ` Andrew Cooper
                     ` (3 more replies)
  2018-06-26 13:18 ` [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants Andrew Cooper
                   ` (2 subsequent siblings)
  6 siblings, 4 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

The existing bit names are excessively long (45 chars!), and can be trimmed
down substantially.  Drop the IA32 prefix and abbreviate FEATURE_CONTROL to
FEAT_CTL.  Furthermore, all of these are feature enablement bits, so drop the
ENABLE/ON parts of the constants.

While altering all the users, take the opportunity to introduce cpu_has_smx
and clean up _vmx_cpu_up() with respect to its MSR_FEATURE_CONTROL handling.

The SENTER constants are unreferenced and dropped.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

An item on my TODO list for later is to fix the handling of FEAT_CTL_LOCK.  We
now have two places which look for a BIOS misconfiguration and fix it up;
whichever gets called first (LMCE or VMX) will lock the other feature out.
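
For context, a sketch (not a proposed fix): once FEAT_CTL_LOCK is set, further
writes to MSR_FEATURE_CONTROL #GP, so all wanted enable bits have to be
accumulated before the single locking write, along the lines of:

    uint64_t feat_ctl;

    rdmsrl(MSR_FEATURE_CONTROL, feat_ctl);
    if ( !(feat_ctl & FEAT_CTL_LOCK) )
    {
        feat_ctl |= FEAT_CTL_VMX_OUTSIDE_SMX;    /* if VMX is wanted */
        feat_ctl |= FEAT_CTL_LMCE;               /* if LMCE is wanted */
        wrmsrl(MSR_FEATURE_CONTROL, feat_ctl | FEAT_CTL_LOCK);
    }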
---
 xen/arch/x86/cpu/mcheck/mce_intel.c |  7 +++----
 xen/arch/x86/cpu/mcheck/vmce.c      |  2 +-
 xen/arch/x86/cpu/mwait-idle.c       |  4 ++--
 xen/arch/x86/hvm/vmx/vmcs.c         | 31 ++++++++++++++-----------------
 xen/arch/x86/hvm/vmx/vmx.c          | 13 ++++++-------
 xen/include/asm-x86/cpufeature.h    |  1 +
 xen/include/asm-x86/msr-index.h     | 16 +++++++---------
 7 files changed, 34 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index e5dd956..2d285d0 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -727,15 +727,14 @@ static bool intel_enable_lmce(void)
     /*
      * Section "Enabling Local Machine Check" in Intel SDM Vol 3
      * requires software must ensure the LOCK bit and LMCE_ON bit
-     * of MSR_IA32_FEATURE_CONTROL are set before setting
+     * of MSR_FEATURE_CONTROL are set before setting
      * MSR_IA32_MCG_EXT_CTL.LMCE_EN.
      */
 
-    if ( rdmsr_safe(MSR_IA32_FEATURE_CONTROL, msr_content) )
+    if ( rdmsr_safe(MSR_FEATURE_CONTROL, msr_content) )
         return false;
 
-    if ( (msr_content & IA32_FEATURE_CONTROL_LOCK) &&
-         (msr_content & IA32_FEATURE_CONTROL_LMCE_ON) )
+    if ( (msr_content & FEAT_CTL_LOCK) && (msr_content & FEAT_CTL_LMCE) )
     {
         wrmsrl(MSR_IA32_MCG_EXT_CTL, MCG_EXT_CTL_LMCE_EN);
         return true;
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..87f299b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -204,7 +204,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
     case MSR_IA32_MCG_EXT_CTL:
         /*
          * If MCG_LMCE_P is present in guest MSR_IA32_MCG_CAP, the LMCE and LOCK
-         * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
+         * bits are always set in guest MSR_FEATURE_CONTROL by Xen, so it
          * does not need to check them here.
          */
         if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 77fc3dd..bb96b32 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -1078,10 +1078,10 @@ static void __init sklh_idle_state_table_update(void)
 
 	/* if SGX is present */
 	if (boot_cpu_has(X86_FEATURE_SGX)) {
-		rdmsrl(MSR_IA32_FEATURE_CONTROL, msr);
+		rdmsrl(MSR_FEATURE_CONTROL, msr);
 
 		/* if SGX is enabled */
-		if (msr & IA32_FEATURE_CONTROL_SGX_ENABLE)
+		if (msr & FEAT_CTL_SGX)
 			return;
 	}
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 884c672..95a0e37 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -610,9 +610,9 @@ void vmx_cpu_dead(unsigned int cpu)
 
 int _vmx_cpu_up(bool bsp)
 {
-    u32 eax, edx;
-    int rc, bios_locked, cpu = smp_processor_id();
-    u64 cr0, vmx_cr0_fixed0, vmx_cr0_fixed1;
+    int rc, cpu = smp_processor_id();
+    uint64_t cr0, vmx_cr0_fixed0, vmx_cr0_fixed1, feat_ctl;
+    bool bios_locked;
 
     BUG_ON(!(read_cr4() & X86_CR4_VMXE));
 
@@ -630,14 +630,14 @@ int _vmx_cpu_up(bool bsp)
         return -EINVAL;
     }
 
-    rdmsr(MSR_IA32_FEATURE_CONTROL, eax, edx);
+    rdmsrl(MSR_FEATURE_CONTROL, feat_ctl);
 
-    bios_locked = !!(eax & IA32_FEATURE_CONTROL_LOCK);
+    bios_locked = feat_ctl & FEAT_CTL_LOCK;
     if ( bios_locked )
     {
-        if ( !(eax & (tboot_in_measured_env()
-                      ? IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX
-                      : IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX)) )
+        if ( !(feat_ctl & (tboot_in_measured_env()
+                           ? FEAT_CTL_VMX_INSIDE_SMX
+                           : FEAT_CTL_VMX_OUTSIDE_SMX)) )
         {
             printk("CPU%d: VMX disabled by BIOS.\n", cpu);
             return -EINVAL;
@@ -645,11 +645,9 @@ int _vmx_cpu_up(bool bsp)
     }
     else
     {
-        eax  = IA32_FEATURE_CONTROL_LOCK;
-        eax |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
-        if ( test_bit(X86_FEATURE_SMX, &boot_cpu_data.x86_capability) )
-            eax |= IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX;
-        wrmsr(MSR_IA32_FEATURE_CONTROL, eax, 0);
+        feat_ctl = (FEAT_CTL_LOCK | FEAT_CTL_VMX_OUTSIDE_SMX |
+                    (cpu_has_smx ? FEAT_CTL_VMX_INSIDE_SMX : 0));
+        wrmsrl(MSR_FEATURE_CONTROL, feat_ctl);
     }
 
     if ( (rc = vmx_init_vmcs_config()) != 0 )
@@ -663,10 +661,9 @@ int _vmx_cpu_up(bool bsp)
     switch ( __vmxon(this_cpu(vmxon_region)) )
     {
     case -2: /* #UD or #GP */
-        if ( bios_locked &&
-             test_bit(X86_FEATURE_SMX, &boot_cpu_data.x86_capability) &&
-             (!(eax & IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX) ||
-              !(eax & IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX)) )
+        if ( bios_locked && cpu_has_smx &&
+             (!(feat_ctl & FEAT_CTL_VMX_OUTSIDE_SMX) ||
+              !(feat_ctl & FEAT_CTL_VMX_INSIDE_SMX)) )
         {
             printk("CPU%d: VMXON failed: perhaps because of TXT settings "
                    "in your BIOS configuration?\n", cpu);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4318b15..d5334c9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2861,12 +2861,11 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_DEBUGCTLMSR:
         __vmread(GUEST_IA32_DEBUGCTL, msr_content);
         break;
-    case MSR_IA32_FEATURE_CONTROL:
-        *msr_content = IA32_FEATURE_CONTROL_LOCK;
-        if ( vmce_has_lmce(curr) )
-            *msr_content |= IA32_FEATURE_CONTROL_LMCE_ON;
-        if ( nestedhvm_enabled(curr->domain) )
-            *msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
+    case MSR_FEATURE_CONTROL:
+        *msr_content = (FEAT_CTL_LOCK |
+                        (nestedhvm_enabled(curr->domain)
+                         ? FEAT_CTL_VMX_OUTSIDE_SMX : 0) |
+                        (vmce_has_lmce(curr) ? FEAT_CTL_LMCE : 0));
         break;
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
         if ( !nvmx_msr_read_intercept(msr, msr_content) )
@@ -3123,7 +3122,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 
         break;
     }
-    case MSR_IA32_FEATURE_CONTROL:
+    case MSR_FEATURE_CONTROL:
     case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
         /* None of these MSRs are writeable. */
         goto gp_fault;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 4bc6c91..94f9c9e 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -47,6 +47,7 @@
 #define cpu_has_pclmulqdq       boot_cpu_has(X86_FEATURE_PCLMULQDQ)
 #define cpu_has_monitor         boot_cpu_has(X86_FEATURE_MONITOR)
 #define cpu_has_vmx             boot_cpu_has(X86_FEATURE_VMX)
+#define cpu_has_smx             boot_cpu_has(X86_FEATURE_SMX)
 #define cpu_has_eist            boot_cpu_has(X86_FEATURE_EIST)
 #define cpu_has_ssse3           boot_cpu_has(X86_FEATURE_SSSE3)
 #define cpu_has_fma             boot_cpu_has(X86_FEATURE_FMA)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 48d80e9..b9072b6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -15,6 +15,13 @@
  * abbreviated name.
  */
 
+#define MSR_FEATURE_CONTROL             0x0000003a
+#define FEAT_CTL_LOCK                   (_AC(1, ULL) <<  0)
+#define FEAT_CTL_VMX_INSIDE_SMX         (_AC(1, ULL) <<  1)
+#define FEAT_CTL_VMX_OUTSIDE_SMX        (_AC(1, ULL) <<  2)
+#define FEAT_CTL_SGX                    (_AC(1, ULL) << 18)
+#define FEAT_CTL_LMCE                   (_AC(1, ULL) << 20)
+
 /* Speculation Controls. */
 #define MSR_SPEC_CTRL                   0x00000048
 #define SPEC_CTRL_IBRS                  (_AC(1, ULL) <<  0)
@@ -321,15 +328,6 @@
 #define MSR_IA32_EBL_CR_POWERON		0x0000002a
 #define MSR_IA32_EBC_FREQUENCY_ID	0x0000002c
 
-#define MSR_IA32_FEATURE_CONTROL	0x0000003a
-#define IA32_FEATURE_CONTROL_LOCK                     0x0001
-#define IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX  0x0002
-#define IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX 0x0004
-#define IA32_FEATURE_CONTROL_SENTER_PARAM_CTL         0x7f00
-#define IA32_FEATURE_CONTROL_ENABLE_SENTER            0x8000
-#define IA32_FEATURE_CONTROL_SGX_ENABLE               0x40000
-#define IA32_FEATURE_CONTROL_LMCE_ON                  0x100000
-
 #define MSR_IA32_TSC_ADJUST		0x0000003b
 
 #define MSR_IA32_APICBASE		0x0000001b
-- 
2.1.4



* [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
@ 2018-06-26 13:18 ` Andrew Cooper
  2018-06-27 13:26   ` Wei Liu
  2018-06-27 13:32   ` Roger Pau Monné
  2018-06-26 13:18 ` [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants Andrew Cooper
  2018-06-26 18:22 ` [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers Andrew Cooper
  6 siblings, 2 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We currently have MSR_IA32_APICBASE and MSR_IA32_APICBASE_MSR, which are
synonymous from a naming point of view but refer to very different things.

Cleave out the handling of MSR_APIC_BASE (0x1b), and rename
MSR_IA32_APICBASE_BASE to APIC_BASE_ADDR_MASK to better describe its
purpose.
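
To illustrate the distinction (a sketch, not part of the patch): 0x1b is the
architectural APIC base MSR, while 0x800 is merely the start of the x2APIC
MSR range (renamed in the next patch).  With the new names, typical usage
reads:

    uint64_t msr;
    paddr_t base;

    rdmsrl(MSR_APIC_BASE, msr);
    if ( msr & APIC_BASE_ENABLE )            /* xAPIC globally enabled? */
        base = msr & APIC_BASE_ADDR_MASK;    /* physical base of the MMIO window */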

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/apic.c              | 66 ++++++++++++++++++++--------------------
 xen/arch/x86/genapic/x2apic.c    |  4 +--
 xen/arch/x86/hvm/hvm.c           |  4 +--
 xen/arch/x86/hvm/vlapic.c        | 15 +++++----
 xen/include/asm-x86/hvm/vlapic.h |  6 ++--
 xen/include/asm-x86/msr-index.h  | 11 ++++---
 6 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index ffa5a69..aa677e0 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -302,31 +302,31 @@ void disable_local_APIC(void)
 
     if (enabled_via_apicbase) {
         uint64_t msr_content;
-        rdmsrl(MSR_IA32_APICBASE, msr_content);
-        wrmsrl(MSR_IA32_APICBASE, msr_content &
-               ~(MSR_IA32_APICBASE_ENABLE|MSR_IA32_APICBASE_EXTD));
+        rdmsrl(MSR_APIC_BASE, msr_content);
+        wrmsrl(MSR_APIC_BASE, msr_content &
+               ~(APIC_BASE_ENABLE | APIC_BASE_EXTD));
     }
 
     if ( kexecing && (current_local_apic_mode() != apic_boot_mode) )
     {
         uint64_t msr_content;
-        rdmsrl(MSR_IA32_APICBASE, msr_content);
-        msr_content &= ~(MSR_IA32_APICBASE_ENABLE|MSR_IA32_APICBASE_EXTD);
-        wrmsrl(MSR_IA32_APICBASE, msr_content);
+        rdmsrl(MSR_APIC_BASE, msr_content);
+        msr_content &= ~(APIC_BASE_ENABLE | APIC_BASE_EXTD);
+        wrmsrl(MSR_APIC_BASE, msr_content);
 
         switch ( apic_boot_mode )
         {
         case APIC_MODE_DISABLED:
             break; /* Nothing to do - we did this above */
         case APIC_MODE_XAPIC:
-            msr_content |= MSR_IA32_APICBASE_ENABLE;
-            wrmsrl(MSR_IA32_APICBASE, msr_content);
+            msr_content |= APIC_BASE_ENABLE;
+            wrmsrl(MSR_APIC_BASE, msr_content);
             break;
         case APIC_MODE_X2APIC:
-            msr_content |= MSR_IA32_APICBASE_ENABLE;
-            wrmsrl(MSR_IA32_APICBASE, msr_content);
-            msr_content |= MSR_IA32_APICBASE_EXTD;
-            wrmsrl(MSR_IA32_APICBASE, msr_content);
+            msr_content |= APIC_BASE_ENABLE;
+            wrmsrl(MSR_APIC_BASE, msr_content);
+            msr_content |= APIC_BASE_EXTD;
+            wrmsrl(MSR_APIC_BASE, msr_content);
             break;
         default:
             printk("Default case when reverting #%d lapic to boot state\n",
@@ -478,12 +478,12 @@ static void __enable_x2apic(void)
 {
     uint64_t msr_content;
 
-    rdmsrl(MSR_IA32_APICBASE, msr_content);
-    if ( !(msr_content & MSR_IA32_APICBASE_EXTD) )
+    rdmsrl(MSR_APIC_BASE, msr_content);
+    if ( !(msr_content & APIC_BASE_EXTD) )
     {
-        msr_content |= MSR_IA32_APICBASE_ENABLE | MSR_IA32_APICBASE_EXTD;
+        msr_content |= APIC_BASE_ENABLE | APIC_BASE_EXTD;
         msr_content = (uint32_t)msr_content;
-        wrmsrl(MSR_IA32_APICBASE, msr_content);
+        wrmsrl(MSR_APIC_BASE, msr_content);
     }
 }
 
@@ -743,10 +743,10 @@ int lapic_resume(void)
      */
     if ( !x2apic_enabled )
     {
-        rdmsrl(MSR_IA32_APICBASE, msr_content);
-        msr_content &= ~MSR_IA32_APICBASE_BASE;
-        wrmsrl(MSR_IA32_APICBASE,
-            msr_content | MSR_IA32_APICBASE_ENABLE | mp_lapic_addr);
+        rdmsrl(MSR_APIC_BASE, msr_content);
+        msr_content &= ~APIC_BASE_ADDR_MASK;
+        wrmsrl(MSR_APIC_BASE,
+               msr_content | APIC_BASE_ENABLE | mp_lapic_addr);
     }
     else
         resume_x2apic();
@@ -817,7 +817,8 @@ static int __init detect_init_APIC (void)
     if (enable_local_apic < 0)
         return -1;
 
-    if (rdmsr_safe(MSR_IA32_APICBASE, msr_content)) {
+    if ( rdmsr_safe(MSR_APIC_BASE, msr_content) )
+    {
         printk("No local APIC present\n");
         return -1;
     }
@@ -838,11 +839,12 @@ static int __init detect_init_APIC (void)
          * software for Intel P6 or later and AMD K7
          * (Model > 1) or later.
          */
-        if (!(msr_content & MSR_IA32_APICBASE_ENABLE)) {
+        if ( !(msr_content & APIC_BASE_ENABLE) )
+        {
             printk("Local APIC disabled by BIOS -- reenabling.\n");
-            msr_content &= ~MSR_IA32_APICBASE_BASE;
-            msr_content |= MSR_IA32_APICBASE_ENABLE | APIC_DEFAULT_PHYS_BASE;
-            wrmsrl(MSR_IA32_APICBASE, msr_content);
+            msr_content &= ~APIC_BASE_ADDR_MASK;
+            msr_content |= APIC_BASE_ENABLE | APIC_DEFAULT_PHYS_BASE;
+            wrmsrl(MSR_APIC_BASE, msr_content);
             enabled_via_apicbase = true;
         }
     }
@@ -859,8 +861,8 @@ static int __init detect_init_APIC (void)
     mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
 
     /* The BIOS may have set up the APIC at some other address */
-    if (msr_content & MSR_IA32_APICBASE_ENABLE)
-        mp_lapic_addr = msr_content & MSR_IA32_APICBASE_BASE;
+    if ( msr_content & APIC_BASE_ENABLE )
+        mp_lapic_addr = msr_content & APIC_BASE_ADDR_MASK;
 
     if (nmi_watchdog != NMI_NONE)
         nmi_watchdog = NMI_LOCAL_APIC;
@@ -1446,23 +1448,21 @@ void __init record_boot_APIC_mode(void)
                 apic_mode_to_str(apic_boot_mode));
 }
 
-/* Look at the bits in MSR_IA32_APICBASE and work out which
- * APIC mode we are in */
+/* Look at the bits in MSR_APIC_BASE and work out which APIC mode we are in */
 enum apic_mode current_local_apic_mode(void)
 {
     u64 msr_contents;
 
-    rdmsrl(MSR_IA32_APICBASE, msr_contents);
+    rdmsrl(MSR_APIC_BASE, msr_contents);
 
     /* Reading EXTD bit from the MSR is only valid if CPUID
      * says so, else reserved */
-    if ( boot_cpu_has(X86_FEATURE_X2APIC)
-         && (msr_contents & MSR_IA32_APICBASE_EXTD) )
+    if ( boot_cpu_has(X86_FEATURE_X2APIC) && (msr_contents & APIC_BASE_EXTD) )
         return APIC_MODE_X2APIC;
 
     /* EN bit should always be valid as long as we can read the MSR
      */
-    if ( msr_contents & MSR_IA32_APICBASE_ENABLE )
+    if ( msr_contents & APIC_BASE_ENABLE )
         return APIC_MODE_XAPIC;
 
     return APIC_MODE_DISABLED;
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index 4779b0d..1cf67ea 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -253,8 +253,8 @@ void __init check_x2apic_preenabled(void)
         return;
 
     /* Check whether x2apic mode was already enabled by the BIOS. */
-    rdmsr(MSR_IA32_APICBASE, lo, hi);
-    if ( lo & MSR_IA32_APICBASE_EXTD )
+    rdmsr(MSR_APIC_BASE, lo, hi);
+    if ( lo & APIC_BASE_EXTD )
     {
         printk("x2APIC mode is already enabled by BIOS.\n");
         x2apic_enabled = 1;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5823620..9fbddfa 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3443,7 +3443,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = hvm_msr_tsc_aux(v);
         break;
 
-    case MSR_IA32_APICBASE:
+    case MSR_APIC_BASE:
         *msr_content = vcpu_vlapic(v)->hw.apic_base_msr;
         break;
 
@@ -3597,7 +3597,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             wrmsr_tsc_aux(msr_content);
         break;
 
-    case MSR_IA32_APICBASE:
+    case MSR_APIC_BASE:
         if ( !vlapic_msr_set(vcpu_vlapic(v), msr_content) )
             goto gp_fault;
         break;
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..aa25967 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1095,11 +1095,11 @@ bool_t vlapic_msr_set(struct vlapic *vlapic, uint64_t value)
     if ( !has_vlapic(vlapic_domain(vlapic)) )
         return 0;
 
-    if ( (vlapic->hw.apic_base_msr ^ value) & MSR_IA32_APICBASE_ENABLE )
+    if ( (vlapic->hw.apic_base_msr ^ value) & APIC_BASE_ENABLE )
     {
-        if ( unlikely(value & MSR_IA32_APICBASE_EXTD) )
+        if ( unlikely(value & APIC_BASE_EXTD) )
             return 0;
-        if ( value & MSR_IA32_APICBASE_ENABLE )
+        if ( value & APIC_BASE_ENABLE )
         {
             vlapic_reset(vlapic);
             vlapic->hw.disabled &= ~VLAPIC_HW_DISABLED;
@@ -1111,7 +1111,7 @@ bool_t vlapic_msr_set(struct vlapic *vlapic, uint64_t value)
             pt_may_unmask_irq(vlapic_domain(vlapic), NULL);
         }
     }
-    else if ( ((vlapic->hw.apic_base_msr ^ value) & MSR_IA32_APICBASE_EXTD) &&
+    else if ( ((vlapic->hw.apic_base_msr ^ value) & APIC_BASE_EXTD) &&
               unlikely(!vlapic_xapic_mode(vlapic)) )
         return 0;
 
@@ -1396,10 +1396,9 @@ void vlapic_reset(struct vlapic *vlapic)
     if ( !has_vlapic(v->domain) )
         return;
 
-    vlapic->hw.apic_base_msr = (MSR_IA32_APICBASE_ENABLE |
-                                APIC_DEFAULT_PHYS_BASE);
+    vlapic->hw.apic_base_msr = APIC_BASE_ENABLE | APIC_DEFAULT_PHYS_BASE;
     if ( v->vcpu_id == 0 )
-        vlapic->hw.apic_base_msr |= MSR_IA32_APICBASE_BSP;
+        vlapic->hw.apic_base_msr |= APIC_BASE_BSP;
 
     vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24);
     vlapic_do_init(vlapic);
@@ -1531,7 +1530,7 @@ static int lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
     if ( s->loaded.regs )
         lapic_load_fixup(s);
 
-    if ( !(s->hw.apic_base_msr & MSR_IA32_APICBASE_ENABLE) &&
+    if ( !(s->hw.apic_base_msr & APIC_BASE_ENABLE) &&
          unlikely(vlapic_x2apic_mode(s)) )
         return -EINVAL;
 
diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
index 212c36b..0028fa2 100644
--- a/xen/include/asm-x86/hvm/vlapic.h
+++ b/xen/include/asm-x86/hvm/vlapic.h
@@ -50,13 +50,13 @@
 #define vlapic_enabled(vlapic)     (!vlapic_disabled(vlapic))
 
 #define vlapic_base_address(vlapic)                             \
-    ((vlapic)->hw.apic_base_msr & MSR_IA32_APICBASE_BASE)
+    ((vlapic)->hw.apic_base_msr & APIC_BASE_ADDR_MASK)
 /* Only check EXTD bit as EXTD can't be set if it is disabled by hardware */
 #define vlapic_x2apic_mode(vlapic)                              \
-    ((vlapic)->hw.apic_base_msr & MSR_IA32_APICBASE_EXTD)
+    ((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD)
 #define vlapic_xapic_mode(vlapic)                               \
     (!vlapic_hw_disabled(vlapic) && \
-     !((vlapic)->hw.apic_base_msr & MSR_IA32_APICBASE_EXTD))
+     !((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD))
 
 /*
  * Generic APIC bitmap vector update & search routines.
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index b9072b6..ce2e847 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -15,6 +15,12 @@
  * abbreviated name.
  */
 
+#define MSR_APIC_BASE                   0x0000001b
+#define APIC_BASE_BSP                   (_AC(1, ULL) <<  8)
+#define APIC_BASE_EXTD                  (_AC(1, ULL) << 10)
+#define APIC_BASE_ENABLE                (_AC(1, ULL) << 11)
+#define APIC_BASE_ADDR_MASK             0x000ffffffffff000ULL
+
 #define MSR_FEATURE_CONTROL             0x0000003a
 #define FEAT_CTL_LOCK                   (_AC(1, ULL) <<  0)
 #define FEAT_CTL_VMX_INSIDE_SMX         (_AC(1, ULL) <<  1)
@@ -330,11 +336,6 @@
 
 #define MSR_IA32_TSC_ADJUST		0x0000003b
 
-#define MSR_IA32_APICBASE		0x0000001b
-#define MSR_IA32_APICBASE_BSP		(1<<8)
-#define MSR_IA32_APICBASE_EXTD		(1<<10)
-#define MSR_IA32_APICBASE_ENABLE	(1<<11)
-#define MSR_IA32_APICBASE_BASE		0x000ffffffffff000ul
 #define MSR_IA32_APICBASE_MSR           0x800
 #define MSR_IA32_APICTPR_MSR            0x808
 #define MSR_IA32_APICPPR_MSR            0x80a
-- 
2.1.4



* [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-06-26 13:18 ` [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants Andrew Cooper
@ 2018-06-26 13:18 ` Andrew Cooper
  2018-06-27 13:26   ` Wei Liu
                     ` (2 more replies)
  2018-06-26 18:22 ` [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers Andrew Cooper
  6 siblings, 3 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 13:18 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The name MSR_IA32_APICBASE_MSR doesn't logically relate to its purpose.
Rename it to MSR_X2APIC_FIRST and introduce a corresponding
MSR_X2APIC_LAST to avoid opencoding the length of the x2APIC MSR range.

For the specific registers, drop the IA32 infix, break the APIC part
away from the register name, and drop the MSR suffix.
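
As a reminder (sketch only, helper name invented): each x2APIC MSR maps to the
xAPIC MMIO register at offset ((msr - MSR_X2APIC_FIRST) << 4), so
MSR_X2APIC_TPR (0x808) corresponds to the xAPIC TPR at offset 0x80, and range
checks become:

    static inline bool is_x2apic_msr(unsigned int msr)
    {
        return msr >= MSR_X2APIC_FIRST && msr <= MSR_X2APIC_LAST;
    }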

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/hvm.c          |  4 ++--
 xen/arch/x86/hvm/vlapic.c       |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c      | 20 ++++++++++----------
 xen/include/asm-x86/msr-index.h | 18 ++++++++++--------
 4 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9fbddfa..1f5f144 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3447,7 +3447,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = vcpu_vlapic(v)->hw.apic_base_msr;
         break;
 
-    case MSR_IA32_APICBASE_MSR ... MSR_IA32_APICBASE_MSR + 0x3ff:
+    case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( hvm_x2apic_msr_read(v, msr, msr_content) )
             goto gp_fault;
         break;
@@ -3606,7 +3606,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         vlapic_tdt_msr_set(vcpu_vlapic(v), msr_content);
         break;
 
-    case MSR_IA32_APICBASE_MSR ... MSR_IA32_APICBASE_MSR + 0x3ff:
+    case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( hvm_x2apic_msr_write(v, msr, msr_content) )
             goto gp_fault;
         break;
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index aa25967..96efc0f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -684,7 +684,7 @@ int hvm_x2apic_msr_read(struct vcpu *v, unsigned int msr, uint64_t *msr_content)
 #undef REGBLOCK
         };
     const struct vlapic *vlapic = vcpu_vlapic(v);
-    uint32_t high = 0, reg = msr - MSR_IA32_APICBASE_MSR, offset = reg << 4;
+    uint32_t high = 0, reg = msr - MSR_X2APIC_FIRST, offset = reg << 4;
 
     if ( !vlapic_x2apic_mode(vlapic) ||
          (reg >= sizeof(readable) * 8) || !test_bit(reg, readable) )
@@ -988,7 +988,7 @@ int vlapic_apicv_write(struct vcpu *v, unsigned int offset)
 int hvm_x2apic_msr_write(struct vcpu *v, unsigned int msr, uint64_t msr_content)
 {
     struct vlapic *vlapic = vcpu_vlapic(v);
-    uint32_t offset = (msr - MSR_IA32_APICBASE_MSR) << 4;
+    uint32_t offset = (msr - MSR_X2APIC_FIRST) << 4;
 
     if ( !vlapic_x2apic_mode(vlapic) )
         return X86EMUL_UNHANDLEABLE;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d5334c9..48e2f8c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2995,19 +2995,19 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
                 SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
             if ( cpu_has_vmx_apic_reg_virt )
             {
-                for ( msr = MSR_IA32_APICBASE_MSR;
-                      msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
+                for ( msr = MSR_X2APIC_FIRST;
+                      msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
                     vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
 
-                vmx_set_msr_intercept(v, MSR_IA32_APICPPR_MSR, VMX_MSR_R);
-                vmx_set_msr_intercept(v, MSR_IA32_APICTMICT_MSR, VMX_MSR_R);
-                vmx_set_msr_intercept(v, MSR_IA32_APICTMCCT_MSR, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
             }
             if ( cpu_has_vmx_virtual_intr_delivery )
             {
-                vmx_clear_msr_intercept(v, MSR_IA32_APICTPR_MSR, VMX_MSR_W);
-                vmx_clear_msr_intercept(v, MSR_IA32_APICEOI_MSR, VMX_MSR_W);
-                vmx_clear_msr_intercept(v, MSR_IA32_APICSELF_MSR, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
             }
         }
         else
@@ -3016,8 +3016,8 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
     }
     if ( !(v->arch.hvm_vmx.secondary_exec_control &
            SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
-        for ( msr = MSR_IA32_APICBASE_MSR;
-              msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
+        for ( msr = MSR_X2APIC_FIRST;
+              msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
             vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
 
     vmx_update_secondary_exec_control(v);
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index ce2e847..9d96e96 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -49,6 +49,16 @@
 #define MSR_MISC_FEATURES_ENABLES       0x00000140
 #define MISC_FEATURES_CPUID_FAULTING    (_AC(1, ULL) <<  0)
 
+#define MSR_X2APIC_FIRST                0x00000800
+#define MSR_X2APIC_LAST                 0x00000bff
+
+#define MSR_X2APIC_TPR                  0x00000808
+#define MSR_X2APIC_PPR                  0x0000080a
+#define MSR_X2APIC_EOI                  0x0000080b
+#define MSR_X2APIC_TMICT                0x00000838
+#define MSR_X2APIC_TMCCT                0x00000839
+#define MSR_X2APIC_SELF                 0x0000083f
+
 #define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
 #define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
 #define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
@@ -336,14 +346,6 @@
 
 #define MSR_IA32_TSC_ADJUST		0x0000003b
 
-#define MSR_IA32_APICBASE_MSR           0x800
-#define MSR_IA32_APICTPR_MSR            0x808
-#define MSR_IA32_APICPPR_MSR            0x80a
-#define MSR_IA32_APICEOI_MSR            0x80b
-#define MSR_IA32_APICTMICT_MSR          0x838
-#define MSR_IA32_APICTMCCT_MSR          0x839
-#define MSR_IA32_APICSELF_MSR           0x83f
-
 #define MSR_IA32_UCODE_WRITE		0x00000079
 #define MSR_IA32_UCODE_REV		0x0000008b
 
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
@ 2018-06-26 15:33   ` Wei Liu
  2018-06-27 10:39   ` Roger Pau Monné
  2018-06-28 13:00   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-26 15:33 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:13PM +0100, Andrew Cooper wrote:
> The bit position constants are only used by the trampoline asm, but the
> code is shorter and clearer when using the mask constants.  This halves
> the number of constants used.
> 
> Consistently use _AC() for the bit constants, and start to use spaces
> for indentation.  Furthermore, EFER contains the NX-Enable bit, to
> rename the constant to EFER_NXE to match both the AMD and Intel specs.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 2/6] x86/msr: Cleanup of misc constants
  2018-06-26 13:18 ` [PATCH 2/6] x86/msr: Cleanup of misc constants Andrew Cooper
@ 2018-06-26 15:43   ` Wei Liu
  2018-06-27 10:48   ` Roger Pau Monné
  1 sibling, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-26 15:43 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:14PM +0100, Andrew Cooper wrote:
> Begin the process of cleaning up msr-index.h.  Order the MSRs at the
> head of the file by index, use spaces for indentation, _AC() for bit
> positions, and add a comment describing the expected style.  Abbreviate
> the ARCH_CAPS_* constants to reduce code volume.
> 
> Leave a trailing comment to logically split the file in two, between the
> now-consistent head, and legacy tail which will be improved moving
> forwards.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants
  2018-06-26 13:18 ` [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants Andrew Cooper
@ 2018-06-26 16:31   ` Wei Liu
  2018-06-27 11:08   ` Roger Pau Monné
  2018-06-28 13:04   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-26 16:31 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:15PM +0100, Andrew Cooper wrote:
> These MSRs, while being Intel specific, are used to offer virtualised
> CPUID faulting support on AMD hardware, so remove the INTEL infix.
> 
> The bit position constants are used by guest_rdmsr(), but the logic can
> be expressed using MASK_INSR() which allows the removal of the bit
> position constants.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
@ 2018-06-26 17:59   ` Andrew Cooper
  2018-06-27  9:05     ` Jan Beulich
  2018-06-27 11:08   ` Wei Liu
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 17:59 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Roger Pau Monné

On 26/06/18 14:18, Andrew Cooper wrote:
>  xen/arch/x86/cpu/mwait-idle.c       |  4 ++--

I forgot to say that this patch as shown may impact the ease of taking
new code from Linux.

While we don't want to proliferate the broken APIs of the current
rdmsr() infrastructure, one option would be to introduce a
<xen/linux-compat.h> file for inclusion by code taken from Linux, to ease
the differences.
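
Something like the following, purely as a sketch (names and contents
illustrative only), mapping the Linux spellings back onto ours:

  /* xen/include/xen/linux-compat.h (hypothetical) */
  #define MSR_IA32_APICBASE         MSR_APIC_BASE
  #define MSR_IA32_APICBASE_ENABLE  APIC_BASE_ENABLE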

~Andrew


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers
  2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
                   ` (5 preceding siblings ...)
  2018-06-26 13:18 ` [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants Andrew Cooper
@ 2018-06-26 18:22 ` Andrew Cooper
  2018-06-27 13:35   ` Wei Liu
                     ` (2 more replies)
  6 siblings, 3 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-26 18:22 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

One recurring code pattern is to read an MSR, modify one or more bits,
and write the result back.  Introduce helpers for this purpose.

First, introduce rdmsr_split() and wrmsr_split() which are tiny static inline
wrappers which deal with the MSR value in two 32bit halves.

Next, construct msr_{set,clear}_bits() in terms of the {rdmsr,wrmsr}_split().
The mask operations are deliberately performed as 32bit operations, because
all callers pass in a constant to the mask parameter, and in all current
cases, one of the two operations can be elided.
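
As an illustrative sketch, a call such as the MSR_K7_HWCR one below is, in
effect:

  uint32_t lo, hi;

  rdmsr_split(MSR_K7_HWCR, &lo, &hi);
  lo |= 1 << 6;
  hi |= 0;            /* (1ULL << 6) >> 32 == 0, so this half is elided */
  wrmsr_split(MSR_K7_HWCR, lo, hi);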

For MSR_IA32_PSR_L3_QOS_CFG, switch PSR_L3_QOS_CDP_ENABLE from being a bit
position variable to being a plain number.

The resulting C is shorter, and doesn't require a temporary variable.  The
generated ASM is also more efficient, because of avoiding the
packing/unpacking operations.  e.g. the delta in the first hunk is from:

  b9 1b 00 00 00          mov    $0x1b,%ecx
  0f 32                   rdmsr
  48 c1 e2 20             shl    $0x20,%rdx
  48 09 d0                or     %rdx,%rax
  80 e4 f3                and    $0xf3,%ah
  48 89 c2                mov    %rax,%rdx
  48 c1 ea 20             shr    $0x20,%rdx
  0f 30                   wrmsr

to:

  b9 1b 00 00 00          mov    $0x1b,%ecx
  0f 32                   rdmsr
  80 e4 f3                and    $0xf3,%ah
  0f 30                   wrmsr

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/apic.c                 |  8 ++------
 xen/arch/x86/cpu/amd.c              | 19 ++++++-------------
 xen/arch/x86/cpu/centaur.c          |  8 ++------
 xen/arch/x86/cpu/mcheck/mce_intel.c |  7 ++-----
 xen/arch/x86/efi/efi-boot.h         | 11 ++++-------
 xen/arch/x86/hvm/svm/svm.c          |  3 +--
 xen/arch/x86/psr.c                  |  9 +--------
 xen/include/asm-x86/msr.h           | 34 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/psr.h           |  2 +-
 9 files changed, 53 insertions(+), 48 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index aa677e0..acd8a8a 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -300,12 +300,8 @@ void disable_local_APIC(void)
      */
     apic_write(APIC_SPIV, apic_read(APIC_SPIV) & ~APIC_SPIV_APIC_ENABLED);
 
-    if (enabled_via_apicbase) {
-        uint64_t msr_content;
-        rdmsrl(MSR_APIC_BASE, msr_content);
-        wrmsrl(MSR_APIC_BASE, msr_content &
-               ~(APIC_BASE_ENABLE | APIC_BASE_EXTD));
-    }
+    if ( enabled_via_apicbase )
+        msr_clear_bits(MSR_APIC_BASE, APIC_BASE_ENABLE | APIC_BASE_EXTD);
 
     if ( kexecing && (current_local_apic_mode() != apic_boot_mode) )
     {
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 458a3fe..e6970ad 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -538,11 +538,8 @@ static void init_amd(struct cpuinfo_x86 *c)
 	 * Errata 63 for SH-B3 steppings
 	 * Errata 122 for all steppings (F+ have it disabled by default)
 	 */
-	if (c->x86 == 15) {
-		rdmsrl(MSR_K7_HWCR, value);
-		value |= 1 << 6;
-		wrmsrl(MSR_K7_HWCR, value);
-	}
+	if (c->x86 == 15)
+		msr_set_bits(MSR_K7_HWCR, 1 << 6);
 
 	/*
 	 * Some AMD CPUs duplicate the 3DNow bit in base and extended CPUID
@@ -748,9 +745,7 @@ static void init_amd(struct cpuinfo_x86 *c)
 		 * guests. Prevent this conversion by clearing bit 24 in
 		 * MSR_F10_BU_CFG2.
 		 */
-		rdmsrl(MSR_F10_BU_CFG2, value);
-		value &= ~(1ULL << 24);
-		wrmsrl(MSR_F10_BU_CFG2, value);
+		msr_clear_bits(MSR_F10_BU_CFG2, 1 << 24);
 	}
 
 	/*
@@ -771,11 +766,9 @@ static void init_amd(struct cpuinfo_x86 *c)
 		wrmsrl(MSR_K7_PERFCTR3, 0);
 	}
 
-	if (cpu_has(c, X86_FEATURE_EFRO)) {
-		rdmsr(MSR_K7_HWCR, l, h);
-		l |= (1 << 27); /* Enable read-only APERF/MPERF bit */
-		wrmsr(MSR_K7_HWCR, l, h);
-	}
+	if (cpu_has(c, X86_FEATURE_EFRO))
+		/* Enable read-only APERF/MPERF bit */
+		msr_set_bits(MSR_K7_HWCR, 1 << 27);
 
 	/* Prevent TSC drift in non single-processor, single-core platforms. */
 	if ((smp_processor_id() == 1) && !cpu_has(c, X86_FEATURE_ITSC))
diff --git a/xen/arch/x86/cpu/centaur.c b/xen/arch/x86/cpu/centaur.c
index 1c760be..dfdb5f4 100644
--- a/xen/arch/x86/cpu/centaur.c
+++ b/xen/arch/x86/cpu/centaur.c
@@ -16,25 +16,21 @@
 
 static void init_c3(struct cpuinfo_x86 *c)
 {
-	uint64_t msr_content;
-
 	/* Test for Centaur Extended Feature Flags presence */
 	if (cpuid_eax(0xC0000000) >= 0xC0000001) {
 		u32 tmp = cpuid_edx(0xC0000001);
 
 		/* enable ACE unit, if present and disabled */
 		if ((tmp & (ACE_PRESENT | ACE_ENABLED)) == ACE_PRESENT) {
-			rdmsrl(MSR_VIA_FCR, msr_content);
 			/* enable ACE unit */
-			wrmsrl(MSR_VIA_FCR, msr_content | ACE_FCR);
+			msr_set_bits(MSR_VIA_FCR, ACE_FCR);
 			printk(KERN_INFO "CPU: Enabled ACE h/w crypto\n");
 		}
 
 		/* enable RNG unit, if present and disabled */
 		if ((tmp & (RNG_PRESENT | RNG_ENABLED)) == RNG_PRESENT) {
-			rdmsrl(MSR_VIA_RNG, msr_content);
 			/* enable RNG unit */
-			wrmsrl(MSR_VIA_RNG, msr_content | RNG_ENABLE);
+			msr_set_bits(MSR_VIA_RNG, RNG_ENABLE);
 			printk(KERN_INFO "CPU: Enabled h/w RNG\n");
 		}
 	}
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index 2d285d0..c5f171d 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -164,11 +164,8 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
     val |= (APIC_DM_FIXED | APIC_LVT_MASKED);  /* we'll mask till we're ready */
     apic_write(APIC_LVTTHMR, val);
 
-    rdmsrl(MSR_IA32_THERM_INTERRUPT, msr_content);
-    wrmsrl(MSR_IA32_THERM_INTERRUPT, msr_content | 0x03);
-
-    rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-    wrmsrl(MSR_IA32_MISC_ENABLE, msr_content | (1ULL<<3));
+    msr_set_bits(MSR_IA32_THERM_INTERRUPT, 0x3);
+    msr_set_bits(MSR_IA32_MISC_ENABLE, 1 << 3);
 
     apic_write(APIC_LVTTHMR, val & ~APIC_LVT_MASKED);
     if ( opt_cpu_info )
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 09bb3f4..6619af9 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -229,18 +229,15 @@ static void __init efi_arch_pre_exit_boot(void)
 
 static void __init noreturn efi_arch_post_exit_boot(void)
 {
-    u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;
+    bool nx = cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX);
+    uint64_t cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, tmp;
 
     efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
     memcpy((void *)trampoline_phys, trampoline_start, cfg.size);
 
     /* Set system registers and transfer control. */
     asm volatile("pushq $0\n\tpopfq");
-    rdmsrl(MSR_EFER, efer);
-    efer |= EFER_SCE;
-    if ( cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX) )
-        efer |= EFER_NXE;
-    wrmsrl(MSR_EFER, efer);
+    msr_set_bits(MSR_EFER, EFER_SCE | (nx ? EFER_NXE : 0));
     write_cr0(X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP |
               X86_CR0_AM | X86_CR0_PG);
     asm volatile ( "mov    %[cr4], %%cr4\n\t"
@@ -260,7 +257,7 @@ static void __init noreturn efi_arch_post_exit_boot(void)
                    "movl   %[cs], 8(%%rsp)\n\t"
                    "mov    %[rip], (%%rsp)\n\t"
                    "lretq  %[stkoff]-16"
-                   : [rip] "=&r" (efer/* any dead 64-bit variable */),
+                   : [rip] "=&r" (tmp),
                      [cr4] "+&r" (cr4)
                    : [cr3] "r" (idle_pg_table),
                      [cs] "ir" (__HYPERVISOR_CS),
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 673a38c..00e3a97 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2484,8 +2484,7 @@ static int svm_is_erratum_383(struct cpu_user_regs *regs)
     for (i = 0; i < nr_mce_banks; i++)
         wrmsrl(MSR_IA32_MCx_STATUS(i), 0ULL);
     
-    rdmsrl(MSR_IA32_MCG_STATUS, msr_content);
-    wrmsrl(MSR_IA32_MCG_STATUS, msr_content & ~(1ULL << 2));
+    msr_clear_bits(MSR_IA32_MCG_STATUS, 1 << 2);
 
     /* flush TLB */
     flush_tlb_mask(v->domain->dirty_cpumask);
diff --git a/xen/arch/x86/psr.c b/xen/arch/x86/psr.c
index 0ba8ef8..c4faa3d 100644
--- a/xen/arch/x86/psr.c
+++ b/xen/arch/x86/psr.c
@@ -342,9 +342,6 @@ static bool cat_init_feature(const struct cpuid_leaf *regs,
         break;
 
     case FEAT_TYPE_L3_CDP:
-    {
-        uint64_t val;
-
         if ( feat->cos_max < 3 )
             return false;
 
@@ -357,12 +354,8 @@ static bool cat_init_feature(const struct cpuid_leaf *regs,
 
         wrmsrl(MSR_IA32_PSR_L3_MASK(0), cat_default_val(feat->cat.cbm_len));
         wrmsrl(MSR_IA32_PSR_L3_MASK(1), cat_default_val(feat->cat.cbm_len));
-        rdmsrl(MSR_IA32_PSR_L3_QOS_CFG, val);
-        wrmsrl(MSR_IA32_PSR_L3_QOS_CFG,
-               val | (1ull << PSR_L3_QOS_CDP_ENABLE_BIT));
-
+        msr_set_bits(MSR_IA32_PSR_L3_QOS_CFG, PSR_L3_QOS_CDP_ENABLE);
         break;
-    }
 
     default:
         return false;
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 8f9f964..cbf92f5 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -11,6 +11,11 @@
 #include <asm/asm_defns.h>
 #include <asm/cpufeature.h>
 
+static inline void rdmsr_split(unsigned int msr, uint32_t *lo, uint32_t *hi)
+{
+    asm volatile ( "rdmsr" : "=a" (*lo), "=d" (*hi) : "c" (msr) );
+}
+
 #define rdmsr(msr,val1,val2) \
      __asm__ __volatile__("rdmsr" \
 			  : "=a" (val1), "=d" (val2) \
@@ -23,6 +28,11 @@
        val = a__ | ((u64)b__<<32); \
 } while(0)
 
+static inline void wrmsr_split(unsigned int msr, uint32_t lo, uint32_t hi)
+{
+    asm volatile ( "wrmsr" :: "a" (lo), "d" (hi), "c" (msr) );
+}
+
 #define wrmsr(msr,val1,val2) \
      __asm__ __volatile__("wrmsr" \
 			  : /* no outputs */ \
@@ -82,6 +92,30 @@ static inline void msr_split(struct cpu_user_regs *regs, uint64_t val)
     regs->rax = (uint32_t)val;
 }
 
+static inline void msr_set_bits(unsigned int msr, uint64_t mask)
+{
+    uint32_t lo, hi;
+
+    rdmsr_split(msr, &lo, &hi);
+
+    lo |= mask;
+    hi |= (mask >> 32);
+
+    wrmsr_split(msr, lo, hi);
+}
+
+static inline void msr_clear_bits(unsigned int msr, uint64_t mask)
+{
+    uint32_t lo, hi;
+
+    rdmsr_split(msr, &lo, &hi);
+
+    lo &= ~mask;
+    hi &= ~(mask >> 32);
+
+    wrmsr_split(msr, lo, hi);
+}
+
 static inline uint64_t rdtsc(void)
 {
     uint32_t low, high;
diff --git a/xen/include/asm-x86/psr.h b/xen/include/asm-x86/psr.h
index c2257da..6caac67 100644
--- a/xen/include/asm-x86/psr.h
+++ b/xen/include/asm-x86/psr.h
@@ -33,7 +33,7 @@
 #define PSR_CAT_CDP_CAPABILITY          (1u << 2)
 
 /* L3 CDP Enable bit*/
-#define PSR_L3_QOS_CDP_ENABLE_BIT       0x0
+#define PSR_L3_QOS_CDP_ENABLE           (_AC(1, ULL) <<  0)
 
 /* Used by psr_get_info() */
 #define PSR_INFO_IDX_COS_MAX            0
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-26 17:59   ` Andrew Cooper
@ 2018-06-27  9:05     ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-06-27  9:05 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 26.06.18 at 19:59, <andrew.cooper3@citrix.com> wrote:
> On 26/06/18 14:18, Andrew Cooper wrote:
>>  xen/arch/x86/cpu/mwait-idle.c       |  4 ++--
> 
> I forgot to say that this patch as shown may impact the ease of taking
> new code from Linux.

The diffstat above doesn't suggest this is going to make things more
complicated. We're already far from being able to take Linux commits
verbatim.

> While we don't want to proliferate the broken APIs of the current
> rdmsr() infrastructure, one option would be to introduce a
> <xen/linux-compat.h> file for inclusion by code taken from Linux, to ease
> the differences.

See the other (ARM related) thread where they also want to introduce
such a header: I'm not really fancying something like this becoming
globally visible.

Jan




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
  2018-06-26 15:33   ` Wei Liu
@ 2018-06-27 10:39   ` Roger Pau Monné
  2018-06-27 10:44     ` Andrew Cooper
  2018-06-28 13:00   ` Jan Beulich
  2 siblings, 1 reply; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 10:39 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:13PM +0100, Andrew Cooper wrote:
> The bit position constants are only used by the trampoline asm, but the
> code is shorter and clearer when using the mask constants.  This halves
> the number of constants used.
> 
> Consistently use _AC() for the bit constants, and start to use spaces
> for indentation.  Furthermore, EFER contains the NX-Enable bit, to
> rename the constant to EFER_NXE to match both the AMD and Intel specs.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just a couple of comments.

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/boot/trampoline.S  |  2 +-
>  xen/arch/x86/boot/wakeup.S      |  7 +++----
>  xen/arch/x86/cpu/intel.c        |  2 +-
>  xen/arch/x86/efi/efi-boot.h     |  2 +-
>  xen/arch/x86/hvm/hvm.c          |  4 ++--
>  xen/include/asm-x86/hvm/hvm.h   |  2 +-
>  xen/include/asm-x86/msr-index.h | 33 ++++++++++++---------------------
>  7 files changed, 21 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
> index 5588c79..7e77e61e 100644
> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -138,7 +138,7 @@ trampoline_protmode_entry:
>          or      $EFER_LME|EFER_SCE,%eax   /* Long Mode + SYSCALL/SYSRET */
>          bt      $cpufeat_bit(X86_FEATURE_NX),%edi /* No Execute? */
>          jnc     1f
> -        btsl    $_EFER_NX,%eax  /* No Execute     */
> +        or      $EFER_NXE, %eax /* No Execute */
>  1:      wrmsr
>  
>          mov     $(X86_CR0_PG | X86_CR0_AM | X86_CR0_WP | X86_CR0_NE |\
> diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch/x86/boot/wakeup.S
> index f9632ee..a37680b 100644
> --- a/xen/arch/x86/boot/wakeup.S
> +++ b/xen/arch/x86/boot/wakeup.S
> @@ -144,11 +144,10 @@ wakeup_32:
>          jz      .Lskip_eferw
>          movl    $MSR_EFER,%ecx
>          rdmsr
> -        btsl    $_EFER_LME,%eax /* Long Mode      */
> -        btsl    $_EFER_SCE,%eax /* SYSCALL/SYSRET */
> -        btl     $20,%edi        /* No Execute?    */
> +        or      $EFER_LME | EFER_SCE, %eax /* Long Mode + SYSCALL/SYSRET */

I usually do: '$(EFER_LME | EFER_SCE), ...' because I think it is clearer.

> +        bt      $cpufeat_bit(X86_FEATURE_NX), %edi /* No Execute? */
>          jnc     1f
> -        btsl    $_EFER_NX,%eax  /* No Execute     */
> +        or      $EFER_NXE, %eax  /* No Execute */
>  1:      wrmsr
>  .Lskip_eferw:
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index 8fbccc8..1b10f3e 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -4,7 +4,6 @@
>  /* CPU model specific register (MSR) numbers */
>  
>  /* x86-64 specific MSRs */
> -#define MSR_EFER		0xc0000080 /* extended feature register */
>  #define MSR_STAR		0xc0000081 /* legacy mode SYSCALL target */
>  #define MSR_LSTAR		0xc0000082 /* long mode SYSCALL target */
>  #define MSR_CSTAR		0xc0000083 /* compat mode SYSCALL target */
> @@ -14,26 +13,6 @@
>  #define MSR_SHADOW_GS_BASE	0xc0000102 /* SwapGS GS shadow */
>  #define MSR_TSC_AUX		0xc0000103 /* Auxiliary TSC */
>  
> -/* EFER bits: */
> -#define _EFER_SCE		0  /* SYSCALL/SYSRET */
> -#define _EFER_LME		8  /* Long mode enable */
> -#define _EFER_LMA		10 /* Long mode active (read-only) */
> -#define _EFER_NX		11 /* No execute enable */
> -#define _EFER_SVME		12 /* AMD: SVM enable */
> -#define _EFER_LMSLE		13 /* AMD: Long-mode segment limit enable */
> -#define _EFER_FFXSE		14 /* AMD: Fast FXSAVE/FXRSTOR enable */
> -
> -#define EFER_SCE		(1<<_EFER_SCE)
> -#define EFER_LME		(1<<_EFER_LME)
> -#define EFER_LMA		(1<<_EFER_LMA)
> -#define EFER_NX			(1<<_EFER_NX)
> -#define EFER_SVME		(1<<_EFER_SVME)
> -#define EFER_LMSLE		(1<<_EFER_LMSLE)
> -#define EFER_FFXSE		(1<<_EFER_FFXSE)
> -
> -#define EFER_KNOWN_MASK		(EFER_SCE | EFER_LME | EFER_LMA | EFER_NX | \
> -				 EFER_SVME | EFER_LMSLE | EFER_FFXSE)
> -
>  /* Speculation Controls. */
>  #define MSR_SPEC_CTRL			0x00000048
>  #define SPEC_CTRL_IBRS			(_AC(1, ULL) << 0)
> @@ -49,6 +28,18 @@
>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>  
> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */

Isn't the align a little bit too much?

And for clarity I would also indent the bitmasks, so:

#define MSR_EFER      0xc0000080          /* Extended Feature Enable Register */
#define   EFER_SCE    (_AC(1, ULL) <<  0) /* SYSCALL Enable */

Or some such. But maybe that doesn't match the current style of the
file.

Roger.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-27 10:39   ` Roger Pau Monné
@ 2018-06-27 10:44     ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-27 10:44 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 27/06/18 11:39, Roger Pau Monné wrote:
> On Tue, Jun 26, 2018 at 02:18:13PM +0100, Andrew Cooper wrote:
>> The bit position constants are only used by the trampoline asm, but the
>> code is shorter and clearer when using the mask constants.  This halves
>> the number of constants used.
>>
>> Consistently use _AC() for the bit constants, and start to use spaces
>> for indentation.  Furthermore, EFER contains the NX-Enable bit, to
>> rename the constant to EFER_NXE to match both the AMD and Intel specs.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> Just a couple of comments.
>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/arch/x86/boot/trampoline.S  |  2 +-
>>  xen/arch/x86/boot/wakeup.S      |  7 +++----
>>  xen/arch/x86/cpu/intel.c        |  2 +-
>>  xen/arch/x86/efi/efi-boot.h     |  2 +-
>>  xen/arch/x86/hvm/hvm.c          |  4 ++--
>>  xen/include/asm-x86/hvm/hvm.h   |  2 +-
>>  xen/include/asm-x86/msr-index.h | 33 ++++++++++++---------------------
>>  7 files changed, 21 insertions(+), 31 deletions(-)
>>
>> diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
>> index 5588c79..7e77e61e 100644
>> --- a/xen/arch/x86/boot/trampoline.S
>> +++ b/xen/arch/x86/boot/trampoline.S
>> @@ -138,7 +138,7 @@ trampoline_protmode_entry:
>>          or      $EFER_LME|EFER_SCE,%eax   /* Long Mode + SYSCALL/SYSRET */
>>          bt      $cpufeat_bit(X86_FEATURE_NX),%edi /* No Execute? */
>>          jnc     1f
>> -        btsl    $_EFER_NX,%eax  /* No Execute     */
>> +        or      $EFER_NXE, %eax /* No Execute */
>>  1:      wrmsr
>>  
>>          mov     $(X86_CR0_PG | X86_CR0_AM | X86_CR0_WP | X86_CR0_NE |\
>> diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch/x86/boot/wakeup.S
>> index f9632ee..a37680b 100644
>> --- a/xen/arch/x86/boot/wakeup.S
>> +++ b/xen/arch/x86/boot/wakeup.S
>> @@ -144,11 +144,10 @@ wakeup_32:
>>          jz      .Lskip_eferw
>>          movl    $MSR_EFER,%ecx
>>          rdmsr
>> -        btsl    $_EFER_LME,%eax /* Long Mode      */
>> -        btsl    $_EFER_SCE,%eax /* SYSCALL/SYSRET */
>> -        btl     $20,%edi        /* No Execute?    */
>> +        or      $EFER_LME | EFER_SCE, %eax /* Long Mode + SYSCALL/SYSRET */
> I usually do: '$(EFER_LME | EFER_SCE), ...' because I think it is clearer.

Ok.

>
>> +        bt      $cpufeat_bit(X86_FEATURE_NX), %edi /* No Execute? */
>>          jnc     1f
>> -        btsl    $_EFER_NX,%eax  /* No Execute     */
>> +        or      $EFER_NXE, %eax  /* No Execute */
>>  1:      wrmsr
>>  .Lskip_eferw:
>> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
>> index 8fbccc8..1b10f3e 100644
>> --- a/xen/include/asm-x86/msr-index.h
>> +++ b/xen/include/asm-x86/msr-index.h
>> @@ -4,7 +4,6 @@
>>  /* CPU model specific register (MSR) numbers */
>>  
>>  /* x86-64 specific MSRs */
>> -#define MSR_EFER		0xc0000080 /* extended feature register */
>>  #define MSR_STAR		0xc0000081 /* legacy mode SYSCALL target */
>>  #define MSR_LSTAR		0xc0000082 /* long mode SYSCALL target */
>>  #define MSR_CSTAR		0xc0000083 /* compat mode SYSCALL target */
>> @@ -14,26 +13,6 @@
>>  #define MSR_SHADOW_GS_BASE	0xc0000102 /* SwapGS GS shadow */
>>  #define MSR_TSC_AUX		0xc0000103 /* Auxiliary TSC */
>>  
>> -/* EFER bits: */
>> -#define _EFER_SCE		0  /* SYSCALL/SYSRET */
>> -#define _EFER_LME		8  /* Long mode enable */
>> -#define _EFER_LMA		10 /* Long mode active (read-only) */
>> -#define _EFER_NX		11 /* No execute enable */
>> -#define _EFER_SVME		12 /* AMD: SVM enable */
>> -#define _EFER_LMSLE		13 /* AMD: Long-mode segment limit enable */
>> -#define _EFER_FFXSE		14 /* AMD: Fast FXSAVE/FXRSTOR enable */
>> -
>> -#define EFER_SCE		(1<<_EFER_SCE)
>> -#define EFER_LME		(1<<_EFER_LME)
>> -#define EFER_LMA		(1<<_EFER_LMA)
>> -#define EFER_NX			(1<<_EFER_NX)
>> -#define EFER_SVME		(1<<_EFER_SVME)
>> -#define EFER_LMSLE		(1<<_EFER_LMSLE)
>> -#define EFER_FFXSE		(1<<_EFER_FFXSE)
>> -
>> -#define EFER_KNOWN_MASK		(EFER_SCE | EFER_LME | EFER_LMA | EFER_NX | \
>> -				 EFER_SVME | EFER_LMSLE | EFER_FFXSE)
>> -
>>  /* Speculation Controls. */
>>  #define MSR_SPEC_CTRL			0x00000048
>>  #define SPEC_CTRL_IBRS			(_AC(1, ULL) << 0)
>> @@ -49,6 +28,18 @@
>>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>>  
>> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
>> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
>> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
>> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
>> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
>> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
>> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
>> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
> Isn't the align a little bit too much?

See later patches in this series.  EFER is the exception, being so short.

>
> And for clarity I would also indent the bitmasks, so:
>
> #define MSR_EFER      0xc0000080          /* Extended Feature Enable Register */
> #define   EFER_SCE    (_AC(1, ULL) <<  0) /* SYSCALL Enable */

I'm open to the idea of indenting the bit constants, if everyone else
agrees.  This mostly affects the next patch, which introduces a style
expectation.

~Andrew


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 2/6] x86/msr: Cleanup of misc constants
  2018-06-26 13:18 ` [PATCH 2/6] x86/msr: Cleanup of misc constants Andrew Cooper
  2018-06-26 15:43   ` Wei Liu
@ 2018-06-27 10:48   ` Roger Pau Monné
  1 sibling, 0 replies; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 10:48 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:14PM +0100, Andrew Cooper wrote:
> Begin the process of cleaning up msr-index.h.  Order the MSRs at the
> head of the file by index, use spaces for indentation, _AC() for bit
> positions, and add a comment describing the expected style.  Abbreviate
> the ARCH_CAPS_* constants to reduce code volume.
> 
> Leave a trailing comment to logically split the file in two, between the
> now-consistent head, and legacy tail which will be improved moving
> forwards.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I would just make the same comment as before about indenting bit
defines of features in MSRs with 2 spaces.

Roger.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
  2018-06-26 17:59   ` Andrew Cooper
@ 2018-06-27 11:08   ` Wei Liu
  2018-06-27 11:21   ` Roger Pau Monné
  2018-06-28 13:11   ` Jan Beulich
  3 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-27 11:08 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Xen-devel, Jun Nakajima,
	Roger Pau Monné

On Tue, Jun 26, 2018 at 02:18:16PM +0100, Andrew Cooper wrote:
> The existing bit names are excessively long (45 chars!), and can be trimmed
> down substantially.  Drop the IA32 prefix and abbreviate FEATURE_CONTROL to
> FEAT_CTL.  Furthermore, all of these are feature enablement bits, so drop
> ENABLE/ON parts of the constants.
> 
> While altering all the users, take the opportunity to introduce cpu_has_smx
> and clean up _vmx_cpu_up() with respect to its MSR_FEATURE_CONTROL handling.
> 
> The SENTER constants are unreferenced and dropped.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>

Re mwait-idle.c I don't think this patch is going to make future
importing more difficult than it already is.

Wei.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants
  2018-06-26 13:18 ` [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants Andrew Cooper
  2018-06-26 16:31   ` Wei Liu
@ 2018-06-27 11:08   ` Roger Pau Monné
  2018-06-28 13:04   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 11:08 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:15PM +0100, Andrew Cooper wrote:
> These MSRs, while being Intel specific, are used to offer virtualised
> CPUID faulting support on AMD hardware, so remove the INTEL infix.
> 
> The bit position constants are used by guest_rdmsr(), but the logic can
> be expressed using MASK_INSR() which allows the removal of the bit
> position constants.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
  2018-06-26 17:59   ` Andrew Cooper
  2018-06-27 11:08   ` Wei Liu
@ 2018-06-27 11:21   ` Roger Pau Monné
  2018-06-28 13:11   ` Jan Beulich
  3 siblings, 0 replies; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 11:21 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:16PM +0100, Andrew Cooper wrote:
> The existing bit names are excessively long (45 chars!), and can be trimmed
> down substantially.  Drop the IA32 prefix and abbreviate FEATURE_CONTROL to
> FEAT_CTL.  Furthermore, all of these are feature enablement bits, so drop
> ENABLE/ON parts of the constants.
> 
> While altering all the users, take the opportunity to introduce cpu_has_smx
> and clean up _vmx_cpu_up() with respect to its MSR_FEATURE_CONTROL handling.
> 
> The SENTER constants are unreferenced and dropped.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 884c672..95a0e37 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -610,9 +610,9 @@ void vmx_cpu_dead(unsigned int cpu)
>  
>  int _vmx_cpu_up(bool bsp)
>  {
> -    u32 eax, edx;
> -    int rc, bios_locked, cpu = smp_processor_id();
> -    u64 cr0, vmx_cr0_fixed0, vmx_cr0_fixed1;
> +    int rc, cpu = smp_processor_id();

While there you could switch cpu to unsigned int.

Roger.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants
  2018-06-26 13:18 ` [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants Andrew Cooper
@ 2018-06-27 13:26   ` Wei Liu
  2018-06-27 13:32   ` Roger Pau Monné
  1 sibling, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-27 13:26 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:17PM +0100, Andrew Cooper wrote:
> We currently have MSR_IA32_APICBASE and MSR_IA32_APICBASE_MSR which are
> synonymous from a naming point of view, but refer to very different
> things.
> 
> Cleave out the handling of MSR_APIC_BASE (0x1b), and rename
> MSR_IA32_APICBASE_BASE to APIC_BASE_ADDR_MASK to better describe its
> purpose.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants
  2018-06-26 13:18 ` [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants Andrew Cooper
@ 2018-06-27 13:26   ` Wei Liu
  2018-06-27 13:50   ` Roger Pau Monné
  2018-06-28 13:18   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-27 13:26 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:18PM +0100, Andrew Cooper wrote:
> The name MSR_IA32_APICBASE_MSR doesn't logically relate to its purpose.
> Rename it to MSR_X2APIC_FIRST and introduce a corresponding
> MSR_X2APIC_LAST to avoid opencoding the length of the x2APIC MSR range.
> 
> For the specific registers, drop the IA32 infix, break the APIC part
> away from the register name, and drop the MSR suffix.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants
  2018-06-26 13:18 ` [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants Andrew Cooper
  2018-06-27 13:26   ` Wei Liu
@ 2018-06-27 13:32   ` Roger Pau Monné
  2018-06-27 13:35     ` Andrew Cooper
  1 sibling, 1 reply; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 13:32 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:17PM +0100, Andrew Cooper wrote:
> We currently have MSR_IA32_APICBASE and MSR_IA32_APICBASE_MSR which are
> synonymous from a naming point of view, but refer to very different
> things.
> 
> Cleave out the handling of MSR_APIC_BASE (0x1b), and rename
> MSR_IA32_APICBASE_BASE to APIC_BASE_ADDR_MASK to better describe its
> purpose.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
> index ffa5a69..aa677e0 100644
> --- a/xen/arch/x86/apic.c
> +++ b/xen/arch/x86/apic.c
> @@ -1446,23 +1448,21 @@ void __init record_boot_APIC_mode(void)
>                  apic_mode_to_str(apic_boot_mode));
>  }
>  
> -/* Look at the bits in MSR_IA32_APICBASE and work out which
> - * APIC mode we are in */
> +/* Look at the bits in MSR_APIC_BASE and work out which APIC mode we are in */
>  enum apic_mode current_local_apic_mode(void)
>  {
>      u64 msr_contents;
>  
> -    rdmsrl(MSR_IA32_APICBASE, msr_contents);
> +    rdmsrl(MSR_APIC_BASE, msr_contents);
>  
>      /* Reading EXTD bit from the MSR is only valid if CPUID
>       * says so, else reserved */
> -    if ( boot_cpu_has(X86_FEATURE_X2APIC)
> -         && (msr_contents & MSR_IA32_APICBASE_EXTD) )
> +    if ( boot_cpu_has(X86_FEATURE_X2APIC) && (msr_contents & APIC_BASE_EXTD) )

While there you could change it to cpu_has_x2apic.

Roger.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants
  2018-06-27 13:32   ` Roger Pau Monné
@ 2018-06-27 13:35     ` Andrew Cooper
  2018-06-27 14:50       ` Andrew Cooper
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-06-27 13:35 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 27/06/18 14:32, Roger Pau Monné wrote:
> On Tue, Jun 26, 2018 at 02:18:17PM +0100, Andrew Cooper wrote:
>> We currently have MSR_IA32_APICBASE and MSR_IA32_APICBASE_MSR which are
>> synonymous from a naming point of view, but refer to very different
>> things.
>>
>> Cleave out the handling of MSR_APIC_BASE (0x1b), and rename
>> MSR_IA32_APICBASE_BASE to APIC_BASE_ADDR_MASK to better describe its
>> purpose.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
>> diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
>> index ffa5a69..aa677e0 100644
>> --- a/xen/arch/x86/apic.c
>> +++ b/xen/arch/x86/apic.c
>> @@ -1446,23 +1448,21 @@ void __init record_boot_APIC_mode(void)
>>                  apic_mode_to_str(apic_boot_mode));
>>  }
>>  
>> -/* Look at the bits in MSR_IA32_APICBASE and work out which
>> - * APIC mode we are in */
>> +/* Look at the bits in MSR_APIC_BASE and work out which APIC mode we are in */
>>  enum apic_mode current_local_apic_mode(void)
>>  {
>>      u64 msr_contents;
>>  
>> -    rdmsrl(MSR_IA32_APICBASE, msr_contents);
>> +    rdmsrl(MSR_APIC_BASE, msr_contents);
>>  
>>      /* Reading EXTD bit from the MSR is only valid if CPUID
>>       * says so, else reserved */
>> -    if ( boot_cpu_has(X86_FEATURE_X2APIC)
>> -         && (msr_contents & MSR_IA32_APICBASE_EXTD) )
>> +    if ( boot_cpu_has(X86_FEATURE_X2APIC) && (msr_contents & APIC_BASE_EXTD) )
> While there you could change it to cpu_has_x2apic.

So I can.  Thanks,

~Andrew


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers
  2018-06-26 18:22 ` [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers Andrew Cooper
@ 2018-06-27 13:35   ` Wei Liu
  2018-06-27 14:17   ` Roger Pau Monné
  2018-06-28 13:26   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-06-27 13:35 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 07:22:44PM +0100, Andrew Cooper wrote:
> One recurring code pattern is to read an MSR, modify one or more bits,
> and write the result back.  Introduce helpers for this purpose.
> 
> First, introduce rdmsr_split() and wrmsr_split() which are tiny static inline
> wrappers which deal with the MSR value in two 32bit halves.
> 
> Next, construct msr_{set,clear}_bits() in terms of the {rdmsr,wrmsr}_split().
> The mask operations are deliberately performed as 32bit operations, because
> all callers pass in a constant to the mask parameter, and in all current
> cases, one of the two operations can be elided.
> 
> For MSR_IA32_PSR_L3_QOS_CFG, switch PSR_L3_QOS_CDP_ENABLE from being a bit
> position variable to being a plain number.
> 
> The resulting C is shorter, and doesn't require a temporary variable.  The
> generated ASM is also more efficient, because of avoiding the
> packing/unpacking operations.  e.g. the delta in the first hunk is from:
> 
>   b9 1b 00 00 00          mov    $0x1b,%ecx
>   0f 32                   rdmsr
>   48 c1 e2 20             shl    $0x20,%rdx
>   48 09 d0                or     %rdx,%rax
>   80 e4 f3                and    $0xf3,%ah
>   48 89 c2                mov    %rax,%rdx
>   48 c1 ea 20             shr    $0x20,%rdx
>   0f 30                   wrmsr
> 
> to:
> 
>   b9 1b 00 00 00          mov    $0x1b,%ecx
>   0f 32                   rdmsr
>   80 e4 f3                and    $0xf3,%ah
>   0f 30                   wrmsr
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants
  2018-06-26 13:18 ` [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants Andrew Cooper
  2018-06-27 13:26   ` Wei Liu
@ 2018-06-27 13:50   ` Roger Pau Monné
  2018-06-27 14:15     ` Andrew Cooper
  2018-06-28 13:18   ` Jan Beulich
  2 siblings, 1 reply; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 13:50 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 02:18:18PM +0100, Andrew Cooper wrote:
> The name MSR_IA32_APICBASE_MSR doesn't logically relate to its purpose.
> Rename it to MSR_X2APIC_FIRST and introduce a corresponding
> MSR_X2APIC_LAST to avoid opencoding the length of the x2APIC MSR range.
> 
> For the specific registers, drop the IA32 infix, break the APIC part
> away from the register name, and drop the MSR suffix.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Although I have some questions about the existing code.

> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index d5334c9..48e2f8c 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2995,19 +2995,19 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>                  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
>              if ( cpu_has_vmx_apic_reg_virt )
>              {
> -                for ( msr = MSR_IA32_APICBASE_MSR;
> -                      msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
> +                for ( msr = MSR_X2APIC_FIRST;
> +                      msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
>                      vmx_clear_msr_intercept(v, msr, VMX_MSR_R);

I realize this is existing code, but do you know why 0xff is used here
instead of 0xbff (MSR_X2APIC_LAST) or 0x83f (which is the last
implemented x2APIC MSR)?

>  
> -                vmx_set_msr_intercept(v, MSR_IA32_APICPPR_MSR, VMX_MSR_R);
> -                vmx_set_msr_intercept(v, MSR_IA32_APICTMICT_MSR, VMX_MSR_R);
> -                vmx_set_msr_intercept(v, MSR_IA32_APICTMCCT_MSR, VMX_MSR_R);
> +                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
> +                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
> +                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
>              }
>              if ( cpu_has_vmx_virtual_intr_delivery )
>              {
> -                vmx_clear_msr_intercept(v, MSR_IA32_APICTPR_MSR, VMX_MSR_W);
> -                vmx_clear_msr_intercept(v, MSR_IA32_APICEOI_MSR, VMX_MSR_W);
> -                vmx_clear_msr_intercept(v, MSR_IA32_APICSELF_MSR, VMX_MSR_W);
> +                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
> +                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
> +                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
>              }
>          }
>          else
> @@ -3016,8 +3016,8 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>      }
>      if ( !(v->arch.hvm_vmx.secondary_exec_control &
>             SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
> -        for ( msr = MSR_IA32_APICBASE_MSR;
> -              msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
> +        for ( msr = MSR_X2APIC_FIRST;
> +              msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
>              vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
>  
>      vmx_update_secondary_exec_control(v);
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index ce2e847..9d96e96 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -49,6 +49,16 @@
>  #define MSR_MISC_FEATURES_ENABLES       0x00000140
>  #define MISC_FEATURES_CPUID_FAULTING    (_AC(1, ULL) <<  0)
>  
> +#define MSR_X2APIC_FIRST                0x00000800
> +#define MSR_X2APIC_LAST                 0x00000bff

I would use START and END because I think it's more natural rather
than FIRST and LAST, which to me seem to imply there being multiple
x2APICs inside this range (but I'm not a native speaker, so FIRST and
LAST might be just fine).

Roger.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants
  2018-06-27 13:50   ` Roger Pau Monné
@ 2018-06-27 14:15     ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-27 14:15 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 27/06/18 14:50, Roger Pau Monné wrote:
> On Tue, Jun 26, 2018 at 02:18:18PM +0100, Andrew Cooper wrote:
>> The name MSR_IA32_APICBASE_MSR doesn't logically relate to its purpose.
>> Rename it to MSR_X2APIC_FIRST and introduce a corresponding
>> MSR_X2APIC_LAST to avoid opencoding the length of the x2APIC MSR range.
>>
>> For the specific registers, drop the IA32 infix, break the APIC part
>> away from the register name, and drop the MSR suffix.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> Although I have some questions about the existing code.
>
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index d5334c9..48e2f8c 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2995,19 +2995,19 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>>                  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
>>              if ( cpu_has_vmx_apic_reg_virt )
>>              {
>> -                for ( msr = MSR_IA32_APICBASE_MSR;
>> -                      msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
>> +                for ( msr = MSR_X2APIC_FIRST;
>> +                      msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
>>                      vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
> I realize this is existing code, but do you know why 0xff is used here
> instead of 0xbff (MSR_X2APIC_LAST) or 0x83f (which is the last
> implemented x2APIC MSR)?

Good questions.  0xbff can't be used because the SandyBridge uarch has
some undocumented MSRs implemented in that range.

There appears to be no justification for 0xff in the patch which
introduced it, and 0x83f would be a more appropriate upper bound. I'll
submit a separate fix for this, as there is a far, far more efficient way
to do this operation.
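
Purely as a sketch of the bound change (not the more efficient rework
mentioned above), the loops would become something like:

  for ( msr = MSR_X2APIC_FIRST; msr <= MSR_X2APIC_FIRST + 0x3f; msr++ )
      vmx_clear_msr_intercept(v, msr, VMX_MSR_R);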

>
>>  
>> -                vmx_set_msr_intercept(v, MSR_IA32_APICPPR_MSR, VMX_MSR_R);
>> -                vmx_set_msr_intercept(v, MSR_IA32_APICTMICT_MSR, VMX_MSR_R);
>> -                vmx_set_msr_intercept(v, MSR_IA32_APICTMCCT_MSR, VMX_MSR_R);
>> +                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
>> +                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
>> +                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
>>              }
>>              if ( cpu_has_vmx_virtual_intr_delivery )
>>              {
>> -                vmx_clear_msr_intercept(v, MSR_IA32_APICTPR_MSR, VMX_MSR_W);
>> -                vmx_clear_msr_intercept(v, MSR_IA32_APICEOI_MSR, VMX_MSR_W);
>> -                vmx_clear_msr_intercept(v, MSR_IA32_APICSELF_MSR, VMX_MSR_W);
>> +                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
>> +                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
>> +                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
>>              }
>>          }
>>          else
>> @@ -3016,8 +3016,8 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>>      }
>>      if ( !(v->arch.hvm_vmx.secondary_exec_control &
>>             SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
>> -        for ( msr = MSR_IA32_APICBASE_MSR;
>> -              msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
>> +        for ( msr = MSR_X2APIC_FIRST;
>> +              msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
>>              vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
>>  
>>      vmx_update_secondary_exec_control(v);
>> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
>> index ce2e847..9d96e96 100644
>> --- a/xen/include/asm-x86/msr-index.h
>> +++ b/xen/include/asm-x86/msr-index.h
>> @@ -49,6 +49,16 @@
>>  #define MSR_MISC_FEATURES_ENABLES       0x00000140
>>  #define MISC_FEATURES_CPUID_FAULTING    (_AC(1, ULL) <<  0)
>>  
>> +#define MSR_X2APIC_FIRST                0x00000800
>> +#define MSR_X2APIC_LAST                 0x00000bff
> I would use START and END because I think it's more natural rather
> than FIRST and LAST, which to me seem to imply there being multiple
> x2APICs inside this range (but I'm not a native speaker, so FIRST and
> LAST might be just fine).

FIRST/LAST are entirely fine in this context, as far as English goes.
LAST tends to be less ambiguous than END when it comes to fencepost
errors, as it is an inclusive term.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers
  2018-06-26 18:22 ` [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers Andrew Cooper
  2018-06-27 13:35   ` Wei Liu
@ 2018-06-27 14:17   ` Roger Pau Monné
  2018-06-27 14:27     ` Andrew Cooper
  2018-06-28 13:26   ` Jan Beulich
  2 siblings, 1 reply; 39+ messages in thread
From: Roger Pau Monné @ 2018-06-27 14:17 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Tue, Jun 26, 2018 at 07:22:44PM +0100, Andrew Cooper wrote:
> One recurring code pattern is to read an MSR, modify one or more bits,
> and write the result back.  Introduce helpers for this purpose.
> 
> First, introduce rdmsr_split() and wrmsr_split() which are tiny static inline
> wrappers which deal with the MSR value in two 32bit halves.

I think this needs some kind of explanation, since rdmsr/wrmsr already deal
with the MSR value in two 32bit halves.

> Next, construct msr_{set,clear}_bits() in terms of the {rdmsr,wrmsr}_split().
> The mask operations are deliberately performed as 32bit operations, because
> all callers pass in a constant to the mask parameter, and in all current
> cases, one of the two operations can be elided.
> 
> For MSR_IA32_PSR_L3_QOS_CFG, switch PSR_L3_QOS_CDP_ENABLE from being a bit
> position variable to being a plain number.
> 
> The resulting C is shorter, and doesn't require a temporary variable.  The
> generated ASM is also more efficient, because of avoiding the
> packing/unpacking operations.  e.g. the delta in the first hunk is from:
> 
>   b9 1b 00 00 00          mov    $0x1b,%ecx
>   0f 32                   rdmsr
>   48 c1 e2 20             shl    $0x20,%rdx
>   48 09 d0                or     %rdx,%rax
>   80 e4 f3                and    $0xf3,%ah
>   48 89 c2                mov    %rax,%rdx
>   48 c1 ea 20             shr    $0x20,%rdx
>   0f 30                   wrmsr
> 
> to:
> 
>   b9 1b 00 00 00          mov    $0x1b,%ecx
>   0f 32                   rdmsr
>   80 e4 f3                and    $0xf3,%ah
>   0f 30                   wrmsr
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Provided the intention behind introducing the _split helpers is detailed
in the commit message.

> diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
> index 2d285d0..c5f171d 100644
> --- a/xen/arch/x86/cpu/mcheck/mce_intel.c
> +++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
> @@ -164,11 +164,8 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
>      val |= (APIC_DM_FIXED | APIC_LVT_MASKED);  /* we'll mask till we're ready */
>      apic_write(APIC_LVTTHMR, val);
>  
> -    rdmsrl(MSR_IA32_THERM_INTERRUPT, msr_content);
> -    wrmsrl(MSR_IA32_THERM_INTERRUPT, msr_content | 0x03);
> -
> -    rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
> -    wrmsrl(MSR_IA32_MISC_ENABLE, msr_content | (1ULL<<3));
> +    msr_set_bits(MSR_IA32_THERM_INTERRUPT, 0x3);
> +    msr_set_bits(MSR_IA32_MISC_ENABLE, 1 << 3);
>  
>      apic_write(APIC_LVTTHMR, val & ~APIC_LVT_MASKED);
>      if ( opt_cpu_info )
> diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
> index 09bb3f4..6619af9 100644
> --- a/xen/arch/x86/efi/efi-boot.h
> +++ b/xen/arch/x86/efi/efi-boot.h
> @@ -229,18 +229,15 @@ static void __init efi_arch_pre_exit_boot(void)
>  
>  static void __init noreturn efi_arch_post_exit_boot(void)
>  {
> -    u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;
> +    bool nx = cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX);
> +    uint64_t cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, tmp;
>  
>      efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
>      memcpy((void *)trampoline_phys, trampoline_start, cfg.size);
>  
>      /* Set system registers and transfer control. */
>      asm volatile("pushq $0\n\tpopfq");
> -    rdmsrl(MSR_EFER, efer);
> -    efer |= EFER_SCE;
> -    if ( cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX) )
> -        efer |= EFER_NXE;
> -    wrmsrl(MSR_EFER, efer);
> +    msr_set_bits(MSR_EFER, EFER_SCE | (nx ? EFER_NXE : 0));

I think you can directly use cpu_has_nx?

Also isn't NX always present on amd64 capable CPUs?

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers
  2018-06-27 14:17   ` Roger Pau Monné
@ 2018-06-27 14:27     ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-27 14:27 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 27/06/18 15:17, Roger Pau Monné wrote:
> On Tue, Jun 26, 2018 at 07:22:44PM +0100, Andrew Cooper wrote:
>> One recurring code pattern is to read an MSR, modify one or more bits,
>> and write the result back.  Introduce helpers for this purpose.
>>
>> First, introduce rdmsr_split() and wrmsr_split() which are tiny static inline
>> wrappers which deal with the MSR value in two 32bit halves.
> I think this needs some kind of explanation, since rdmsr/wrmsr already deal
> with MSR in two 32bit halves.

If you look closely, that's not how rdmsr() works.  All of these macros
are for the chopping block over the course of the MSR cleanup.

>> diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
>> index 09bb3f4..6619af9 100644
>> --- a/xen/arch/x86/efi/efi-boot.h
>> +++ b/xen/arch/x86/efi/efi-boot.h
>> @@ -229,18 +229,15 @@ static void __init efi_arch_pre_exit_boot(void)
>>  
>>  static void __init noreturn efi_arch_post_exit_boot(void)
>>  {
>> -    u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;
>> +    bool nx = cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX);
>> +    uint64_t cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, tmp;
>>  
>>      efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
>>      memcpy((void *)trampoline_phys, trampoline_start, cfg.size);
>>  
>>      /* Set system registers and transfer control. */
>>      asm volatile("pushq $0\n\tpopfq");
>> -    rdmsrl(MSR_EFER, efer);
>> -    efer |= EFER_SCE;
>> -    if ( cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX) )
>> -        efer |= EFER_NXE;
>> -    wrmsrl(MSR_EFER, efer);
>> +    msr_set_bits(MSR_EFER, EFER_SCE | (nx ? EFER_NXE : 0));
> I think you can directly use cpu_has_nx?

boot_cpu_data[] isn't filled at this point during boot.

> Also isn't NX always present on amd64 capable CPUs?

If only. :(

The first generation of Intel's 64bit processors don't have NX.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants
  2018-06-27 13:35     ` Andrew Cooper
@ 2018-06-27 14:50       ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-06-27 14:50 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 27/06/18 14:35, Andrew Cooper wrote:
> On 27/06/18 14:32, Roger Pau Monné wrote:
>> On Tue, Jun 26, 2018 at 02:18:17PM +0100, Andrew Cooper wrote:
>>> We currently have MSR_IA32_APICBASE and MSR_IA32_APICBASE_MSR which are
>>> synonymous from a naming point of view, but refer to very different
>>> things.
>>>
>>> Cleave out the handling of MSR_APIC_BASE (0x1b), and rename
>>> MSR_IA32_APICBASE_BASE to APIC_BASE_ADDR_MASK to better describe its
>>> purpose.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>>> diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
>>> index ffa5a69..aa677e0 100644
>>> --- a/xen/arch/x86/apic.c
>>> +++ b/xen/arch/x86/apic.c
>>> @@ -1446,23 +1448,21 @@ void __init record_boot_APIC_mode(void)
>>>                  apic_mode_to_str(apic_boot_mode));
>>>  }
>>>  
>>> -/* Look at the bits in MSR_IA32_APICBASE and work out which
>>> - * APIC mode we are in */
>>> +/* Look at the bits in MSR_APIC_BASE and work out which APIC mode we are in */
>>>  enum apic_mode current_local_apic_mode(void)
>>>  {
>>>      u64 msr_contents;
>>>  
>>> -    rdmsrl(MSR_IA32_APICBASE, msr_contents);
>>> +    rdmsrl(MSR_APIC_BASE, msr_contents);
>>>  
>>>      /* Reading EXTD bit from the MSR is only valid if CPUID
>>>       * says so, else reserved */
>>> -    if ( boot_cpu_has(X86_FEATURE_X2APIC)
>>> -         && (msr_contents & MSR_IA32_APICBASE_EXTD) )
>>> +    if ( boot_cpu_has(X86_FEATURE_X2APIC) && (msr_contents & APIC_BASE_EXTD) )
>> While there you could change it to cpu_has_x2apic.
> So I can.  Thanks,

Actually, this code is among my earliest contributions to Xen, and my
current self is really regretting the lack of review of my older self's
code.

Both comments are false and the check isn't necessary.  I've fixed all
this up and added a comment to the commit message.
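
For the record, a rough sketch of what the simplified function might end
up looking like (the APIC_BASE_ENABLE name and the apic_mode enumerators
are assumed here from context, not taken from the actual v2 patch):

    /* Sketch only: constant and enumerator names are assumptions. */
    enum apic_mode current_local_apic_mode(void)
    {
        uint64_t msr_contents;

        rdmsrl(MSR_APIC_BASE, msr_contents);

        if ( msr_contents & APIC_BASE_EXTD )
            return APIC_MODE_X2APIC;

        if ( msr_contents & APIC_BASE_ENABLE )
            return APIC_MODE_XAPIC;

        return APIC_MODE_DISABLED;
    }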

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
  2018-06-26 15:33   ` Wei Liu
  2018-06-27 10:39   ` Roger Pau Monné
@ 2018-06-28 13:00   ` Jan Beulich
  2018-06-28 13:36     ` Andrew Cooper
  2 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2018-06-28 13:00 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
> @@ -49,6 +28,18 @@
>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>  
> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
> +
> +#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
> +                         EFER_SVME | EFER_LMSLE | EFER_FFXSE)

When meaning to clean up and consolidate these and others, why
don't we switch to architectural MSR names at the same time? While
this will increase source size a little, it'll
- allow grep-ing for the MSRs' uses by their SDM names,
- significantly reduce the risk of name clashes with something on e.g.
  the arm side (EFER may not be the most risky one here, but some
  of the subsequent patches certainly seem to incur such a risk).

I.e. here MSR_IA32_EFER and IA32_EFER_SCE etc.

Other than this I'm certainly fine in general with this cleanup.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants
  2018-06-26 13:18 ` [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants Andrew Cooper
  2018-06-26 16:31   ` Wei Liu
  2018-06-27 11:08   ` Roger Pau Monné
@ 2018-06-28 13:04   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-06-28 13:04 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
> These MSRs, while being Intel specific, are used to offer virtualised
> CPUID faulting support on AMD hardware, so remove the INTEL infix.
> 
> The bit position constants are used by guest_rdmsr(), but the logic can
> be expressed using MASK_INSR() which allows the removal of the bit
> position constants.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-06-27 11:21   ` Roger Pau Monné
@ 2018-06-28 13:11   ` Jan Beulich
  2018-07-02  5:56     ` Tian, Kevin
  3 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2018-06-28 13:11 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -15,6 +15,13 @@
>   * abbreviated name.
>   */
>  
> +#define MSR_FEATURE_CONTROL             0x0000003a
> +#define FEAT_CTL_LOCK                   (_AC(1, ULL) <<  0)
> +#define FEAT_CTL_VMX_INSIDE_SMX         (_AC(1, ULL) <<  1)
> +#define FEAT_CTL_VMX_OUTSIDE_SMX        (_AC(1, ULL) <<  2)
> +#define FEAT_CTL_SGX                    (_AC(1, ULL) << 18)
> +#define FEAT_CTL_LMCE                   (_AC(1, ULL) << 20)

So this is a good example of a case where I'd be rather afraid of possible
name clashes.  I fully agree ...

> @@ -321,15 +328,6 @@
>  #define MSR_IA32_EBL_CR_POWERON		0x0000002a
>  #define MSR_IA32_EBC_FREQUENCY_ID	0x0000002c
>  
> -#define MSR_IA32_FEATURE_CONTROL	0x0000003a
> -#define IA32_FEATURE_CONTROL_LOCK                     0x0001
> -#define IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX  0x0002
> -#define IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX 0x0004

... that especially these two are excessively long.  But omitting the
IA32 infix plus shortening FEATURE_CONTROL to FEAT_CTL is not
helpful.  The latter is even against the naming rules set forth in
patch 2.

I'd be fine with deviating from the SDM here, using MSR_IA32_FEAT_CTL
and IA32_FEAT_CTL_* (perhaps with a comment citing the SDM name,
to make it noticeable to someone grep-ing).
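
Concretely, something along these lines (a sketch of the suggested naming
only, reusing the bit positions from the hunk above):

    #define MSR_IA32_FEAT_CTL               0x0000003a /* SDM: IA32_FEATURE_CONTROL */
    #define IA32_FEAT_CTL_LOCK              (_AC(1, ULL) <<  0)
    #define IA32_FEAT_CTL_VMX_INSIDE_SMX    (_AC(1, ULL) <<  1)
    #define IA32_FEAT_CTL_VMX_OUTSIDE_SMX   (_AC(1, ULL) <<  2)
    #define IA32_FEAT_CTL_SGX               (_AC(1, ULL) << 18)
    #define IA32_FEAT_CTL_LMCE              (_AC(1, ULL) << 20)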

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants
  2018-06-26 13:18 ` [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants Andrew Cooper
  2018-06-27 13:26   ` Wei Liu
  2018-06-27 13:50   ` Roger Pau Monné
@ 2018-06-28 13:18   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-06-28 13:18 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2995,19 +2995,19 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>                  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
>              if ( cpu_has_vmx_apic_reg_virt )
>              {
> -                for ( msr = MSR_IA32_APICBASE_MSR;
> -                      msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
> +                for ( msr = MSR_X2APIC_FIRST;
> +                      msr <= MSR_X2APIC_FIRST + 0xff; msr++ )

With the comment made on the odd upper bound here, and ...

> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -49,6 +49,16 @@
>  #define MSR_MISC_FEATURES_ENABLES       0x00000140
>  #define MISC_FEATURES_CPUID_FAULTING    (_AC(1, ULL) <<  0)
>  
> +#define MSR_X2APIC_FIRST                0x00000800
> +#define MSR_X2APIC_LAST                 0x00000bff

... with you yourself having made clear that there are non-x2APIC
MSRs in this range on at least some models, wouldn't we be better off
with a lower upper bound here, perhaps with a comment explaining
the difference from the theoretical upper bound? At the very least I'd
find it rather helpful for the 0xff above to go away; iirc you've said
you have a clever idea there.

> +#define MSR_X2APIC_TPR                  0x00000808
> +#define MSR_X2APIC_PPR                  0x0000080a
> +#define MSR_X2APIC_EOI                  0x0000080b
> +#define MSR_X2APIC_TMICT                0x00000838
> +#define MSR_X2APIC_TMCCT                0x00000839
> +#define MSR_X2APIC_SELF                 0x0000083f

Take the opportunity and complete the set?
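
For reference, a completed set might look something like this (register
offsets per the SDM's x2APIC layout; the names simply extrapolate the
convention used in this patch and are illustrative only):

    #define MSR_X2APIC_ID                   0x00000802
    #define MSR_X2APIC_VERSION              0x00000803
    #define MSR_X2APIC_TPR                  0x00000808
    #define MSR_X2APIC_PPR                  0x0000080a
    #define MSR_X2APIC_EOI                  0x0000080b
    #define MSR_X2APIC_LDR                  0x0000080d
    #define MSR_X2APIC_SIVR                 0x0000080f
    #define MSR_X2APIC_ISR(n)               (0x00000810 + (n)) /* n = 0..7 */
    #define MSR_X2APIC_TMR(n)               (0x00000818 + (n)) /* n = 0..7 */
    #define MSR_X2APIC_IRR(n)               (0x00000820 + (n)) /* n = 0..7 */
    #define MSR_X2APIC_ESR                  0x00000828
    #define MSR_X2APIC_LVT_CMCI             0x0000082f
    #define MSR_X2APIC_ICR                  0x00000830
    #define MSR_X2APIC_LVT_TIMER            0x00000832
    #define MSR_X2APIC_LVT_THERMAL          0x00000833
    #define MSR_X2APIC_LVT_PMI              0x00000834
    #define MSR_X2APIC_LVT_LINT0            0x00000835
    #define MSR_X2APIC_LVT_LINT1            0x00000836
    #define MSR_X2APIC_LVT_ERROR            0x00000837
    #define MSR_X2APIC_TMICT                0x00000838
    #define MSR_X2APIC_TMCCT                0x00000839
    #define MSR_X2APIC_TDCR                 0x0000083e
    #define MSR_X2APIC_SELF                 0x0000083f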

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers
  2018-06-26 18:22 ` [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers Andrew Cooper
  2018-06-27 13:35   ` Wei Liu
  2018-06-27 14:17   ` Roger Pau Monné
@ 2018-06-28 13:26   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-06-28 13:26 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 26.06.18 at 20:22, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -11,6 +11,11 @@
>  #include <asm/asm_defns.h>
>  #include <asm/cpufeature.h>
>  
> +static inline void rdmsr_split(unsigned int msr, uint32_t *lo, uint32_t *hi)
> +{
> +    asm volatile ( "rdmsr" : "=a" (*lo), "=d" (*hi) : "c" (msr) );

What's the reason for not making this ...

> +}
> +
>  #define rdmsr(msr,val1,val2) \
>       __asm__ __volatile__("rdmsr" \
>  			  : "=a" (val1), "=d" (val2) \

... use this and ...

> @@ -23,6 +28,11 @@
>         val = a__ | ((u64)b__<<32); \
>  } while(0)
>  
> +static inline void wrmsr_split(unsigned int msr, uint32_t lo, uint32_t hi)
> +{
> +    asm volatile ( "wrmsr" :: "a" (lo), "d" (hi), "c" (msr) );

... this use ... 

> +}
> +
>  #define wrmsr(msr,val1,val2) \
>       __asm__ __volatile__("wrmsr" \
>  			  : /* no outputs */ \

... this? In fact it's not really clear why we need the inline functions _and_
the macros. Afaict the two new functions below could also be implemented
with just the macros.
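
For instance, a minimal sketch of the latter option (the signature of
msr_set_bits() is guessed from the commit message, not copied from the
patch):

    /* Sketch only: implemented purely in terms of the existing macros. */
    static inline void msr_set_bits(unsigned int msr, uint64_t mask)
    {
        uint32_t lo, hi;

        rdmsr(msr, lo, hi);

        lo |= (uint32_t)mask;         /* low half of the mask */
        hi |= (uint32_t)(mask >> 32); /* high half of the mask */

        wrmsr(msr, lo, hi);
    }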

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-28 13:00   ` Jan Beulich
@ 2018-06-28 13:36     ` Andrew Cooper
  2018-06-28 13:56       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-06-28 13:36 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

On 28/06/18 14:00, Jan Beulich wrote:
>>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
>> @@ -49,6 +28,18 @@
>>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>>  
>> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
>> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
>> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
>> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
>> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
>> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
>> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
>> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
>> +
>> +#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
>> +                         EFER_SVME | EFER_LMSLE | EFER_FFXSE)
> When meaning to clean up and consolidate these and others, why
> don't we switch to architectural MSR names at the same time? While
> this will increase source size a little, it'll
> - allow grep-ing for the MSRs' uses by their SDM names,
> - significantly reduce the risk of name clashes with something on e.g.
>   the arm side (EFER may not be the most risky one here, but some
>   of the subsequent patches certainly seem to incur such a risk).
>
> I.e. here MSR_IA32_EFER and IA32_EFER_SCE etc.
>
> Other than this I'm certainly fine in general with this cleanup.

Removing IA32 is a deliberate and intended property.  The
non-architectural vs architectural nature of MSRs changes over time,
meaning the names here get stale.

As for grepability, most MSRs can't currently be located like that, and
(naming instability aside) I believe the reduction in code volume is a
more important property to have.

There is no chance of clashing with ARM, as these are all arch-specific
constants.  Any common code referencing them should be fixed by becoming
arch-specific code.

As for other clashes, that is always a (slim) risk, but if we encounter
such a problem, we can easily fix the problem there an then.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-28 13:36     ` Andrew Cooper
@ 2018-06-28 13:56       ` Jan Beulich
  2018-09-07 14:47         ` Andrew Cooper
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2018-06-28 13:56 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.06.18 at 15:36, <andrew.cooper3@citrix.com> wrote:
> On 28/06/18 14:00, Jan Beulich wrote:
>>>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
>>> @@ -49,6 +28,18 @@
>>>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>>>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>>>  
>>> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
>>> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
>>> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
>>> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
>>> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
>>> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
>>> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
>>> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
>>> +
>>> +#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
>>> +                         EFER_SVME | EFER_LMSLE | EFER_FFXSE)
>> When meaning to clean up and consolidate these and others, why
>> don't we switch to architectural MSR names at the same time? While
>> this will increase source size a little, it'll
>> - allow grep-ing for the MSRs' uses by their SDM names,
>> - significantly reduce the risk of name clashes with something on e.g.
>>   the arm side (EFER may not be the most risky one here, but some
>>   of the subsequent patches certainly seem to incur such a risk).
>>
>> I.e. here MSR_IA32_EFER and IA32_EFER_SCE etc.
>>
>> Other than this I'm certainly fine in general with this cleanup.
> 
> Removing IA32 is a deliberate and intended property.  The
> non-architectural vs architectural nature of MSRs changes over time,
> meaning the names here get stale.

But I don't think they've ever changed from IA32 to no IA32. I.e.
once an MSR becomes architectural, we could rename it once and
be done.

> As for grepability, most MSRs can't currently be located like that, and
> (naming instability aside) I believe the reduction in code volume is a
> more important property to have.

The fact that "most MSRs can't currently be located like that" is
what I've long hoped we could overcome.

> There is no chance of clashing with ARM, as these are all arch-specific
> constants.  Any common code referencing them should be fixed by becoming
> arch-specific code.

The risk is for this to go unnoticed for a while.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants
  2018-06-28 13:11   ` Jan Beulich
@ 2018-07-02  5:56     ` Tian, Kevin
  0 siblings, 0 replies; 39+ messages in thread
From: Tian, Kevin @ 2018-07-02  5:56 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: Xen-devel, Wei Liu, Nakajima, Jun, Roger Pau Monne

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, June 28, 2018 9:11 PM
> 
> >>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
> > --- a/xen/include/asm-x86/msr-index.h
> > +++ b/xen/include/asm-x86/msr-index.h
> > @@ -15,6 +15,13 @@
> >   * abbreviated name.
> >   */
> >
> > +#define MSR_FEATURE_CONTROL             0x0000003a
> > +#define FEAT_CTL_LOCK                   (_AC(1, ULL) <<  0)
> > +#define FEAT_CTL_VMX_INSIDE_SMX         (_AC(1, ULL) <<  1)
> > +#define FEAT_CTL_VMX_OUTSIDE_SMX        (_AC(1, ULL) <<  2)
> > +#define FEAT_CTL_SGX                    (_AC(1, ULL) << 18)
> > +#define FEAT_CTL_LMCE                   (_AC(1, ULL) << 20)
> 
> So this is a good example a case where I'd be rather afraid of possible
> name clashes. I fully agree ...
> 
> > @@ -321,15 +328,6 @@
> >  #define MSR_IA32_EBL_CR_POWERON		0x0000002a
> >  #define MSR_IA32_EBC_FREQUENCY_ID	0x0000002c
> >
> > -#define MSR_IA32_FEATURE_CONTROL	0x0000003a
> > -#define IA32_FEATURE_CONTROL_LOCK                     0x0001
> > -#define IA32_FEATURE_CONTROL_ENABLE_VMXON_INSIDE_SMX
> 0x0002
> > -#define IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX
> 0x0004
> 
> ... that especially these two are excessively long. But omitting the
> IA32 infix plus shortening FEATURE_CONTROL to FEAT_CTL is not
> helpful. The latter even is against the naming rules set forth in
> patch 2.
> 
> I'd be fine with deviating from the SDM here, using MSR_IA32_FEAT_CTL
> and IA32_FEAT_CTL_* (perhaps with a comment citing the SDM name,
> to make it noticable to someone grep-ing).
> 

Agree. FEAT_CTL alone is too broad...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-06-28 13:56       ` Jan Beulich
@ 2018-09-07 14:47         ` Andrew Cooper
  2018-09-07 15:09           ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-09-07 14:47 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

On 28/06/18 14:56, Jan Beulich wrote:
>>>> On 28.06.18 at 15:36, <andrew.cooper3@citrix.com> wrote:
>> On 28/06/18 14:00, Jan Beulich wrote:
>>>>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
>>>> @@ -49,6 +28,18 @@
>>>>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>>>>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>>>>  
>>>> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
>>>> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
>>>> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
>>>> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
>>>> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
>>>> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
>>>> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
>>>> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
>>>> +
>>>> +#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
>>>> +                         EFER_SVME | EFER_LMSLE | EFER_FFXSE)
>>> When meaning to clean up and consolidate these and others, why
>>> don't we switch to architectural MSR names at the same time? While
>>> this will increase source size a little, it'll
>>> - allow grep-ing for the MSRs' uses by their SDM names,
>>> - significantly reduce the risk of name clashes with something on e.g.
>>>   the arm side (EFER may not be the most risky one here, but some
>>>   of the subsequent patches certainly seem to incur such a risk).
>>>
>>> I.e. here MSR_IA32_EFER and IA32_EFER_SCE etc.
>>>
>>> Other than this I'm certainly fine in general with this cleanup.
>> Removing IA32 is a deliberate and intended property.  The
>> non-architectural vs architectural nature of MSRs changes over time,
>> meaning the names here get stale.
> But I don't think they've ever changed from IA32 to no IA32. I.e.
> once an MSR becomes architectural, we could rename it once and
> be done.

In some cases, use of the IA32 prefix is different per vendor.

>> As for grepability, most MSRs can't currently be located like that, and
>> (naming instability aside) I believe the reduction in code volume is a
>> more important property to have.
> The fact that "most MSRs can't currently be located like that" is
> what I've long hoped we could overcome.

Why?  It's not like you can grep the docs themselves.  If you are trying
to cross-reference, going by number is far easier.

I don't see any convincing reason for it.

>
>> There is no chance of clashing with ARM, as these are all arch-specific
>> constants.  Any common code referencing them should be fixed by becoming
>> arch-specific code.
> The risk is for this to go unnoticed for a while.

You can make that argument about any choice of naming.  Having an
unnecessarily verbose constant name doesn't alter the risk.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants
  2018-09-07 14:47         ` Andrew Cooper
@ 2018-09-07 15:09           ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-09-07 15:09 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 07.09.18 at 16:47, <andrew.cooper3@citrix.com> wrote:
> On 28/06/18 14:56, Jan Beulich wrote:
>>>>> On 28.06.18 at 15:36, <andrew.cooper3@citrix.com> wrote:
>>> On 28/06/18 14:00, Jan Beulich wrote:
>>>>>>> On 26.06.18 at 15:18, <andrew.cooper3@citrix.com> wrote:
>>>>> @@ -49,6 +28,18 @@
>>>>>  #define ARCH_CAPS_RSBA			(_AC(1, ULL) << 2)
>>>>>  #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
>>>>>  
>>>>> +#define MSR_EFER                        0xc0000080 /* Extended Feature Enable Register */
>>>>> +#define EFER_SCE                        (_AC(1, ULL) <<  0) /* SYSCALL Enable */
>>>>> +#define EFER_LME                        (_AC(1, ULL) <<  8) /* Long Mode Enable */
>>>>> +#define EFER_LMA                        (_AC(1, ULL) << 10) /* Long Mode Active */
>>>>> +#define EFER_NXE                        (_AC(1, ULL) << 11) /* No Execute Enable */
>>>>> +#define EFER_SVME                       (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
>>>>> +#define EFER_LMSLE                      (_AC(1, ULL) << 13) /* Long Mode Segment Limit Enable */
>>>>> +#define EFER_FFXSE                      (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
>>>>> +
>>>>> +#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | \
>>>>> +                         EFER_SVME | EFER_LMSLE | EFER_FFXSE)
>>>> When meaning to clean up and consolidate these and others, why
>>>> don't we switch to architectural MSR names at the same time? While
>>>> this will increase source size a little, it'll
>>>> - allow grep-ing for the MSRs' uses by their SDM names,
>>>> - significantly reduce the risk of name clashes with something on e.g.
>>>>   the arm side (EFER may not be the most risky one here, but some
>>>>   of the subsequent patches certainly seem to incur such a risk).
>>>>
>>>> I.e. here MSR_IA32_EFER and IA32_EFER_SCE etc.
>>>>
>>>> Other than this I'm certainly fine in general with this cleanup.
>>> Removing IA32 is a deliberate and intended property.  The
>>> non-architectural vs architectural nature of MSRs changes over time,
>>> meaning the names here get stale.
>> But I don't think they've ever changed from IA32 to no IA32. I.e.
>> once an MSR becomes architectural, we could rename it once and
>> be done.
> 
> In some cases, use of the IA32 prefix is different per vendor.

I guess as soon as one calls it architectural, we could do the same in
our naming.  (I'm btw unsure I've ever seen AMD introduce an
MSR with an IA32 infix.)

>>> As for grepability, most MSRs can't currently be located like that, and
>>> (naming instability aside) I believe the reduction in code volume is a
>>> more important property to have.
>> The fact that "most MSRs can't currently be located like that" is
>> what I've long hoped we could overcome.
> 
> Why? Its not like you can grep the docs themselves.  If you are trying
> to cross reference, going by number is far easier.

I can surely use the reader's "search" function.

>>> There is no chance of clashing with ARM, as these are all arch-specific
>>> constants.  Any common code referencing them should be fixed by becoming
>>> arch-specific code.
>> The risk is for this to go unnoticed for a while.
> 
> You can make that argument about any choice of naming.  Having an
> unnecessarily verbose constant name doesn't alter the risk.

Just like we use XEN_ / xen_ to separate the public interface name space,
prefixing x86_ or infixing _IA32_ seems like quite a reasonable thing to me
to separate x86 from the ARM and common name spaces.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2018-09-07 15:09 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-26 13:18 [PATCH 0/6] x86/msr: Introductory MSR cleanup Andrew Cooper
2018-06-26 13:18 ` [PATCH 1/6] x86/msr: Clean up the MSR_EFER constants Andrew Cooper
2018-06-26 15:33   ` Wei Liu
2018-06-27 10:39   ` Roger Pau Monné
2018-06-27 10:44     ` Andrew Cooper
2018-06-28 13:00   ` Jan Beulich
2018-06-28 13:36     ` Andrew Cooper
2018-06-28 13:56       ` Jan Beulich
2018-09-07 14:47         ` Andrew Cooper
2018-09-07 15:09           ` Jan Beulich
2018-06-26 13:18 ` [PATCH 2/6] x86/msr: Cleanup of misc constants Andrew Cooper
2018-06-26 15:43   ` Wei Liu
2018-06-27 10:48   ` Roger Pau Monné
2018-06-26 13:18 ` [PATCH 3/6] x86/msr: Clean up the MSR_{PLATFORM_INFO, MISC_FEATURES_ENABLES} constants Andrew Cooper
2018-06-26 16:31   ` Wei Liu
2018-06-27 11:08   ` Roger Pau Monné
2018-06-28 13:04   ` Jan Beulich
2018-06-26 13:18 ` [PATCH 4/6] x86/msr: Clean up the MSR_FEATURE_CONTROL constants Andrew Cooper
2018-06-26 17:59   ` Andrew Cooper
2018-06-27  9:05     ` Jan Beulich
2018-06-27 11:08   ` Wei Liu
2018-06-27 11:21   ` Roger Pau Monné
2018-06-28 13:11   ` Jan Beulich
2018-07-02  5:56     ` Tian, Kevin
2018-06-26 13:18 ` [PATCH 5/6] x86/msr: Clean up the MSR_APIC_BASE constants Andrew Cooper
2018-06-27 13:26   ` Wei Liu
2018-06-27 13:32   ` Roger Pau Monné
2018-06-27 13:35     ` Andrew Cooper
2018-06-27 14:50       ` Andrew Cooper
2018-06-26 13:18 ` [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants Andrew Cooper
2018-06-27 13:26   ` Wei Liu
2018-06-27 13:50   ` Roger Pau Monné
2018-06-27 14:15     ` Andrew Cooper
2018-06-28 13:18   ` Jan Beulich
2018-06-26 18:22 ` [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers Andrew Cooper
2018-06-27 13:35   ` Wei Liu
2018-06-27 14:17   ` Roger Pau Monné
2018-06-27 14:27     ` Andrew Cooper
2018-06-28 13:26   ` Jan Beulich
