* [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting
@ 2017-08-30 10:34 Sergey Dyasli
  2017-08-30 10:34 ` [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy Sergey Dyasli
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-08-30 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Jan Beulich

Currently there are the following issues with the handling of guest
RDMSR/WRMSR in Xen:

1. There is no way to configure which MSRs a guest can and can't access.
   And if there is no MSR handler in Xen for a particular MSR then
   the default behavior is just horrible:

        RDMSR: rdmsr_safe(msr, *msr_content) /* returns a H/W value */
        WRMSR: return X86EMUL_OKAY;  /* write is silently discarded */

2. There are too many handlers. Example for RDMSR:
        priv_op_read_msr()
        hvm_msr_read_intercept()
        vmce_rdmsr()
        svm_msr_read_intercept()
        vmx_msr_read_intercept()
        nvmx_msr_read_intercept()
        rdmsr_viridian_regs()
        ...

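To make issue 1 concrete, here is a minimal, illustrative mock of the
default fall-through behaviour described above (not Xen's actual code;
all names are invented): an unhandled RDMSR hands the guest a raw host
value, and an unhandled WRMSR reports success while storing nothing.

```c
#include <stdbool.h>
#include <stdint.h>

/* Pretend host MSR state; in reality this would come from rdmsr_safe(). */
static uint64_t mock_hw_msr_value = 0xdeadbeefULL;

/* Default RDMSR path: the guest reads the raw host value. */
static uint64_t mock_default_rdmsr(uint32_t msr)
{
    (void)msr;
    return mock_hw_msr_value;   /* host state leaks into the guest */
}

/* Default WRMSR path: reports success but stores nothing. */
static bool mock_default_wrmsr(uint32_t msr, uint64_t val)
{
    (void)msr;
    (void)val;
    return true;                /* "X86EMUL_OKAY"; write silently dropped */
}
```

Both properties are undesirable: the read leaks host details, and the
write makes the guest believe its configuration took effect.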
This series tries to address the above issues in the following way.
Two types of MSR policy objects are introduced:

    1. Per-Domain policy (struct msr_domain_policy) -- for shared MSRs
    2. Per-vCPU policy (struct msr_vcpu_policy) -- for unique MSRs

Each domain and each vCPU inside a domain will now have an associated
MSR policy object. The contents of these structures are defined during
domain creation. For now, each is simply a copy of either the HVM_MAX
or the PV_MAX policy, depending on the guest's type. In the future it
should be possible to control the availability and values of a guest's
MSRs from the toolstack; however, any MSR manipulations must be done
together with the CPUID ones.

Once policy objects are in place, it becomes easy to introduce unified
guest RDMSR and WRMSR handlers. They work directly with the MSR policy
objects, since all of the guest's MSR state is contained there.

The main idea of having an MSR policy is to define the set and contents
of the MSRs that a guest sees. All other MSRs should be inaccessible (an
access would generate a #GP fault). This MSR information should also be
sent in the migration stream.
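The policy idea can be sketched as follows — an illustrative mock under
invented names, not the series' real types: a read succeeds only if the
policy marks the MSR available, and every other MSR access results in a
#GP fault.

```c
#include <stdbool.h>
#include <stdint.h>

struct mock_msr_policy {
    bool platform_info_available;
    bool cpuid_faulting;
};

enum mock_result { MOCK_OKAY, MOCK_GP_FAULT };

#define MOCK_MSR_INTEL_PLATFORM_INFO 0xce

static enum mock_result mock_guest_rdmsr(const struct mock_msr_policy *p,
                                         uint32_t msr, uint64_t *val)
{
    switch ( msr )
    {
    case MOCK_MSR_INTEL_PLATFORM_INFO:
        if ( !p->platform_info_available )
            return MOCK_GP_FAULT;       /* policy hides this MSR */
        *val = (uint64_t)p->cpuid_faulting << 31;
        return MOCK_OKAY;

    default:
        return MOCK_GP_FAULT;           /* unknown MSRs are inaccessible */
    }
}
```

The value returned to the guest is synthesised purely from the policy
object, never read from hardware.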

Since it's impossible to convert all MSRs to use the new infrastructure
right away, this series starts with 2 MSRs responsible for CPUID
faulting:

    1. MSR_INTEL_PLATFORM_INFO
    2. MSR_INTEL_MISC_FEATURES_ENABLES

My previous VMX MSR policy patch set will be rebased on top of this
generic MSR infrastructure after it's merged.

Sergey Dyasli (5):
  x86/msr: introduce struct msr_domain_policy
  x86/msr: introduce struct msr_vcpu_policy
  x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy
  x86/msr: introduce guest_rdmsr()
  x86/msr: introduce guest_wrmsr()

 xen/arch/x86/Makefile          |   1 +
 xen/arch/x86/cpu/intel.c       |   3 +-
 xen/arch/x86/domain.c          |  24 ++++-
 xen/arch/x86/hvm/hvm.c         |  18 +++-
 xen/arch/x86/hvm/vmx/vmx.c     |  31 -------
 xen/arch/x86/msr.c             | 203 +++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/pv/emul-inv-op.c  |   4 +-
 xen/arch/x86/pv/emul-priv-op.c |  43 ++-------
 xen/arch/x86/setup.c           |   1 +
 xen/include/asm-x86/domain.h   |   8 +-
 xen/include/asm-x86/msr.h      |  33 +++++++
 11 files changed, 292 insertions(+), 77 deletions(-)
 create mode 100644 xen/arch/x86/msr.c

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
@ 2017-08-30 10:34 ` Sergey Dyasli
  2017-09-21  1:41   ` Tian, Kevin
  2017-08-30 10:34 ` [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy Sergey Dyasli
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Sergey Dyasli @ 2017-08-30 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Jan Beulich

The new structure contains information about a guest's MSRs that are
shared between all of a domain's vCPUs. It starts with only one MSR:

    MSR_INTEL_PLATFORM_INFO

which currently has only one usable bit: cpuid_faulting.

Add two global policy objects, hvm_max and pv_max, which are initialised
during boot. CPUID faulting can always be emulated for HVM guests, while
for PV guests hardware support is required.
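For reference, the single usable bit mentioned above can be tested as in
the standalone sketch below. The bit positions follow the Intel SDM —
bit 31 of MSR_INTEL_PLATFORM_INFO advertises CPUID-faulting support, and
bit 0 of MSR_INTEL_MISC_FEATURES_ENABLES enables it — but the helper
names are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define PLATFORM_INFO_CPUID_FAULTING_BIT   31  /* support advertised */
#define MISC_FEATURES_CPUID_FAULTING_BIT    0  /* faulting enabled   */

/* Does the (virtual) platform advertise CPUID-faulting support? */
static bool platform_advertises_faulting(uint64_t platform_info)
{
    return (platform_info >> PLATFORM_INFO_CPUID_FAULTING_BIT) & 1;
}

/* Has the guest actually turned CPUID faulting on? */
static bool faulting_enabled(uint64_t misc_features)
{
    return (misc_features >> MISC_FEATURES_CPUID_FAULTING_BIT) & 1;
}
```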

Add init_domain_msr_policy(), which sets the initial MSR policy during
domain creation, with a special case for Dom0.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/Makefile        |  1 +
 xen/arch/x86/domain.c        |  6 +++
 xen/arch/x86/msr.c           | 95 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c         |  1 +
 xen/include/asm-x86/domain.h |  3 +-
 xen/include/asm-x86/msr.h    | 13 ++++++
 6 files changed, 118 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/x86/msr.c

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 93ead6e5dd..d5d58a205e 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -35,6 +35,7 @@ obj-y += i8259.o
 obj-y += io_apic.o
 obj-$(CONFIG_LIVEPATCH) += alternative.o livepatch.o
 obj-y += msi.o
+obj-y += msr.o
 obj-y += ioport_emulate.o
 obj-y += irq.o
 obj-$(CONFIG_KEXEC) += machine_kexec.o
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index baaf8151d2..620666b33a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -425,6 +425,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     {
         d->arch.emulation_flags = 0;
         d->arch.cpuid = ZERO_BLOCK_PTR; /* Catch stray misuses. */
+        d->arch.msr = ZERO_BLOCK_PTR;
     }
     else
     {
@@ -470,6 +471,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         if ( (rc = init_domain_cpuid_policy(d)) )
             goto fail;
 
+        if ( (rc = init_domain_msr_policy(d)) )
+            goto fail;
+
         d->arch.ioport_caps = 
             rangeset_new(d, "I/O Ports", RANGESETF_prettyprint_hex);
         rc = -ENOMEM;
@@ -540,6 +544,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     cleanup_domain_irq_mapping(d);
     free_xenheap_page(d->shared_info);
     xfree(d->arch.cpuid);
+    xfree(d->arch.msr);
     if ( paging_initialised )
         paging_final_teardown(d);
     free_perdomain_mappings(d);
@@ -554,6 +559,7 @@ void arch_domain_destroy(struct domain *d)
 
     xfree(d->arch.e820);
     xfree(d->arch.cpuid);
+    xfree(d->arch.msr);
 
     free_domain_pirqs(d);
     if ( !is_idle_domain(d) )
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
new file mode 100644
index 0000000000..eac50ec897
--- /dev/null
+++ b/xen/arch/x86/msr.c
@@ -0,0 +1,95 @@
+/******************************************************************************
+ * arch/x86/msr.c
+ *
+ * Policy objects for Model-Specific Registers.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2017 Citrix Systems Ltd.
+ */
+
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <asm/msr.h>
+
+struct msr_domain_policy __read_mostly hvm_max_msr_domain_policy,
+                         __read_mostly  pv_max_msr_domain_policy;
+
+static void __init calculate_hvm_max_policy(void)
+{
+    struct msr_domain_policy *dp = &hvm_max_msr_domain_policy;
+
+    if ( !hvm_enabled )
+        return;
+
+    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+    {
+        dp->plaform_info.available = true;
+        dp->plaform_info.cpuid_faulting = true;
+    }
+}
+
+static void __init calculate_pv_max_policy(void)
+{
+    struct msr_domain_policy *dp = &pv_max_msr_domain_policy;
+
+    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    if ( cpu_has_cpuid_faulting )
+    {
+        dp->plaform_info.available = true;
+        dp->plaform_info.cpuid_faulting = true;
+    }
+}
+
+void __init init_guest_msr_policy(void)
+{
+    calculate_hvm_max_policy();
+    calculate_pv_max_policy();
+}
+
+int init_domain_msr_policy(struct domain *d)
+{
+    struct msr_domain_policy *dp;
+
+    dp = xmalloc(struct msr_domain_policy);
+
+    if ( !dp )
+        return -ENOMEM;
+
+    *dp = is_pv_domain(d) ? pv_max_msr_domain_policy :
+                            hvm_max_msr_domain_policy;
+
+    /* See comment in intel_ctxt_switch_levelling() */
+    if ( is_control_domain(d) )
+    {
+        dp->plaform_info.available = false;
+        dp->plaform_info.cpuid_faulting = false;
+    }
+
+    d->arch.msr = dp;
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index db5df6956d..b9a769d92c 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1560,6 +1560,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
         panic("Could not protect TXT memory regions");
 
     init_guest_cpuid();
+    init_guest_msr_policy();
 
     if ( dom0_pvh )
     {
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index c10522b7f5..f08ede3a05 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -356,8 +356,9 @@ struct arch_domain
      */
     uint8_t x87_fip_width;
 
-    /* CPUID Policy. */
+    /* CPUID and MSR policy objects. */
     struct cpuid_policy *cpuid;
+    struct msr_domain_policy *msr;
 
     struct PITState vpit;
 
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 7004b6f398..5cf7be1821 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -202,6 +202,19 @@ void write_efer(u64 val);
 
 DECLARE_PER_CPU(u32, ler_msr);
 
+/* MSR policy object for shared per-domain MSRs */
+struct msr_domain_policy
+{
+    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    struct {
+        bool available; /* This MSR is non-architectural */
+        bool cpuid_faulting;
+    } plaform_info;
+};
+
+void init_guest_msr_policy(void);
+int init_domain_msr_policy(struct domain *d);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_MSR_H */
-- 
2.11.0




* [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
  2017-08-30 10:34 ` [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy Sergey Dyasli
@ 2017-08-30 10:34 ` Sergey Dyasli
  2017-09-21  1:45   ` Tian, Kevin
  2017-09-25  8:32   ` Jan Beulich
  2017-08-30 10:34 ` [PATCH v1 3/5] x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy Sergey Dyasli
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-08-30 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Jan Beulich

The new structure contains information about a guest's MSRs that are
unique to each vCPU. It starts with only one MSR:

    MSR_INTEL_MISC_FEATURES_ENABLES

which currently has only one usable bit: cpuid_faulting.

Add two global policy objects, hvm_max and pv_max, which are initialised
during boot. The availability of MSR_INTEL_MISC_FEATURES_ENABLES depends
on the availability of MSR_INTEL_PLATFORM_INFO.

Add init_vcpu_msr_policy(), which sets the initial MSR policy for every
vCPU during domain creation, with a special case for Dom0.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/domain.c        | 18 ++++++++++++++++--
 xen/arch/x86/msr.c           | 33 +++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |  2 ++
 xen/include/asm-x86/msr.h    | 11 +++++++++++
 4 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 620666b33a..1667d2ad57 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -344,13 +344,27 @@ int vcpu_initialise(struct vcpu *v)
         /* Idle domain */
         v->arch.cr3 = __pa(idle_pg_table);
         rc = 0;
+        v->arch.msr = ZERO_BLOCK_PTR; /* Catch stray misuses */
     }
 
     if ( rc )
-        vcpu_destroy_fpu(v);
-    else if ( !is_idle_domain(v->domain) )
+        goto fail;
+
+    if ( !is_idle_domain(v->domain) )
+    {
         vpmu_initialise(v);
 
+        if ( (rc = init_vcpu_msr_policy(v)) )
+            goto fail;
+    }
+
+    return rc;
+
+ fail:
+    vcpu_destroy_fpu(v);
+    xfree(v->arch.msr);
+    v->arch.msr = NULL;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index eac50ec897..b5ad97d3c8 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -27,9 +27,13 @@
 struct msr_domain_policy __read_mostly hvm_max_msr_domain_policy,
                          __read_mostly  pv_max_msr_domain_policy;
 
+struct msr_vcpu_policy __read_mostly hvm_max_msr_vcpu_policy,
+                       __read_mostly  pv_max_msr_vcpu_policy;
+
 static void __init calculate_hvm_max_policy(void)
 {
     struct msr_domain_policy *dp = &hvm_max_msr_domain_policy;
+    struct msr_vcpu_policy *vp = &hvm_max_msr_vcpu_policy;
 
     if ( !hvm_enabled )
         return;
@@ -40,11 +44,15 @@ static void __init calculate_hvm_max_policy(void)
         dp->plaform_info.available = true;
         dp->plaform_info.cpuid_faulting = true;
     }
+
+    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
+    vp->misc_features_enables.available = dp->plaform_info.available;
 }
 
 static void __init calculate_pv_max_policy(void)
 {
     struct msr_domain_policy *dp = &pv_max_msr_domain_policy;
+    struct msr_vcpu_policy *vp = &pv_max_msr_vcpu_policy;
 
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     if ( cpu_has_cpuid_faulting )
@@ -52,6 +60,9 @@ static void __init calculate_pv_max_policy(void)
         dp->plaform_info.available = true;
         dp->plaform_info.cpuid_faulting = true;
     }
+
+    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
+    vp->misc_features_enables.available = dp->plaform_info.available;
 }
 
 void __init init_guest_msr_policy(void)
@@ -84,6 +95,28 @@ int init_domain_msr_policy(struct domain *d)
     return 0;
 }
 
+int init_vcpu_msr_policy(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct msr_vcpu_policy *vp;
+
+    vp = xmalloc(struct msr_vcpu_policy);
+
+    if ( !vp )
+        return -ENOMEM;
+
+    *vp = is_pv_domain(d) ? pv_max_msr_vcpu_policy :
+                            hvm_max_msr_vcpu_policy;
+
+    /* See comment in intel_ctxt_switch_levelling() */
+    if ( is_control_domain(d) )
+        vp->misc_features_enables.available = false;
+
+    v->arch.msr = vp;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index f08ede3a05..866a03b508 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -575,6 +575,8 @@ struct arch_vcpu
 
     struct arch_vm_event *vm_event;
 
+    struct msr_vcpu_policy *msr;
+
     struct {
         bool next_interrupt_enabled;
     } monitor;
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 5cf7be1821..7c8395b9b3 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -212,8 +212,19 @@ struct msr_domain_policy
     } plaform_info;
 };
 
+/* MSR policy object for per-vCPU MSRs */
+struct msr_vcpu_policy
+{
+    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
+    struct {
+        bool available; /* This MSR is non-architectural */
+        bool cpuid_faulting;
+    } misc_features_enables;
+};
+
 void init_guest_msr_policy(void);
 int init_domain_msr_policy(struct domain *d);
+int init_vcpu_msr_policy(struct vcpu *v);
 
 #endif /* !__ASSEMBLY__ */
 
-- 
2.11.0




* [PATCH v1 3/5] x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
  2017-08-30 10:34 ` [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy Sergey Dyasli
  2017-08-30 10:34 ` [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy Sergey Dyasli
@ 2017-08-30 10:34 ` Sergey Dyasli
  2017-09-21  1:48   ` Tian, Kevin
  2017-08-30 10:34 ` [PATCH v1 4/5] x86/msr: introduce guest_rdmsr() Sergey Dyasli
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Sergey Dyasli @ 2017-08-30 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Jan Beulich

Since each vCPU now has a struct msr_vcpu_policy, use the cpuid_faulting
bit from there in the current logic and remove arch_vcpu::cpuid_faulting.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/cpu/intel.c       |  3 ++-
 xen/arch/x86/hvm/hvm.c         |  4 +++-
 xen/arch/x86/hvm/vmx/vmx.c     | 10 ++++++----
 xen/arch/x86/pv/emul-inv-op.c  |  4 +++-
 xen/arch/x86/pv/emul-priv-op.c |  5 +++--
 xen/include/asm-x86/domain.h   |  3 ---
 6 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 2e20327569..487eb06148 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -156,6 +156,7 @@ static void intel_ctxt_switch_levelling(const struct vcpu *next)
 	struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
 	const struct domain *nextd = next ? next->domain : NULL;
 	const struct cpuidmasks *masks;
+	const struct msr_vcpu_policy *vp = next->arch.msr;
 
 	if (cpu_has_cpuid_faulting) {
 		/*
@@ -176,7 +177,7 @@ static void intel_ctxt_switch_levelling(const struct vcpu *next)
 		 */
 		set_cpuid_faulting(nextd && !is_control_domain(nextd) &&
 				   (is_pv_domain(nextd) ||
-				    next->arch.cpuid_faulting));
+				    vp->misc_features_enables.cpuid_faulting));
 		return;
 	}
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6cb903def5..2ad07d52bc 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3286,7 +3286,9 @@ unsigned long copy_from_user_hvm(void *to, const void *from, unsigned len)
 
 bool hvm_check_cpuid_faulting(struct vcpu *v)
 {
-    if ( !v->arch.cpuid_faulting )
+    const struct msr_vcpu_policy *vp = v->arch.msr;
+
+    if ( !vp->misc_features_enables.cpuid_faulting )
         return false;
 
     return hvm_get_cpl(v) > 0;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 67fc85b201..155fba9017 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2902,7 +2902,7 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
 
     case MSR_INTEL_MISC_FEATURES_ENABLES:
         *msr_content = 0;
-        if ( current->arch.cpuid_faulting )
+        if ( current->arch.msr->misc_features_enables.cpuid_faulting )
             *msr_content |= MSR_MISC_FEATURES_CPUID_FAULTING;
         break;
 
@@ -3134,15 +3134,17 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 
     case MSR_INTEL_MISC_FEATURES_ENABLES:
     {
-        bool old_cpuid_faulting = v->arch.cpuid_faulting;
+        struct msr_vcpu_policy *vp = v->arch.msr;
+        bool old_cpuid_faulting = vp->misc_features_enables.cpuid_faulting;
 
         if ( msr_content & ~MSR_MISC_FEATURES_CPUID_FAULTING )
             goto gp_fault;
 
-        v->arch.cpuid_faulting = msr_content & MSR_MISC_FEATURES_CPUID_FAULTING;
+        vp->misc_features_enables.cpuid_faulting =
+            msr_content & MSR_MISC_FEATURES_CPUID_FAULTING;
 
         if ( cpu_has_cpuid_faulting &&
-             (old_cpuid_faulting ^ v->arch.cpuid_faulting) )
+             (old_cpuid_faulting ^ vp->misc_features_enables.cpuid_faulting) )
             ctxt_switch_levelling(v);
         break;
     }
diff --git a/xen/arch/x86/pv/emul-inv-op.c b/xen/arch/x86/pv/emul-inv-op.c
index 415d294c53..f8944170d5 100644
--- a/xen/arch/x86/pv/emul-inv-op.c
+++ b/xen/arch/x86/pv/emul-inv-op.c
@@ -66,6 +66,7 @@ static int emulate_forced_invalid_op(struct cpu_user_regs *regs)
     char sig[5], instr[2];
     unsigned long eip, rc;
     struct cpuid_leaf res;
+    const struct msr_vcpu_policy *vp = current->arch.msr;
 
     eip = regs->rip;
 
@@ -89,7 +90,8 @@ static int emulate_forced_invalid_op(struct cpu_user_regs *regs)
         return 0;
 
     /* If cpuid faulting is enabled and CPL>0 inject a #GP in place of #UD. */
-    if ( current->arch.cpuid_faulting && !guest_kernel_mode(current, regs) )
+    if ( vp->misc_features_enables.cpuid_faulting &&
+         !guest_kernel_mode(current, regs) )
     {
         regs->rip = eip;
         pv_inject_hw_exception(TRAP_gp_fault, regs->error_code);
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index d50f51944f..66cda538fc 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -948,7 +948,7 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
              rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, *val) )
             break;
         *val = 0;
-        if ( curr->arch.cpuid_faulting )
+        if ( curr->arch.msr->misc_features_enables.cpuid_faulting )
             *val |= MSR_MISC_FEATURES_CPUID_FAULTING;
         return X86EMUL_OKAY;
 
@@ -1154,7 +1154,8 @@ static int priv_op_write_msr(unsigned int reg, uint64_t val,
         if ( (val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
              !this_cpu(cpuid_faulting_enabled) )
             break;
-        curr->arch.cpuid_faulting = !!(val & MSR_MISC_FEATURES_CPUID_FAULTING);
+        curr->arch.msr->misc_features_enables.cpuid_faulting =
+            !!(val & MSR_MISC_FEATURES_CPUID_FAULTING);
         return X86EMUL_OKAY;
 
     case MSR_P6_PERFCTR(0) ... MSR_P6_PERFCTR(7):
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 866a03b508..60c02650d5 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -555,9 +555,6 @@ struct arch_vcpu
      * and thus should be saved/restored. */
     bool_t nonlazy_xstate_used;
 
-    /* Has the guest enabled CPUID faulting? */
-    bool cpuid_faulting;
-
     /*
      * The SMAP check policy when updating runstate_guest(v) and the
      * secondary system time.
-- 
2.11.0




* [PATCH v1 4/5] x86/msr: introduce guest_rdmsr()
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
                   ` (2 preceding siblings ...)
  2017-08-30 10:34 ` [PATCH v1 3/5] x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy Sergey Dyasli
@ 2017-08-30 10:34 ` Sergey Dyasli
  2017-08-30 10:43   ` Andrew Cooper
  2017-09-21  1:53   ` Tian, Kevin
  2017-08-30 10:34 ` [PATCH v1 5/5] x86/msr: introduce guest_wrmsr() Sergey Dyasli
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-08-30 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Jan Beulich

The new function is responsible for handling RDMSR from both HVM and PV
guests. Currently it handles only 2 MSRs:

    MSR_INTEL_PLATFORM_INFO
    MSR_INTEL_MISC_FEATURES_ENABLES

It has a different behaviour compared to the old MSR handlers: if an
MSR is handled by guest_rdmsr(), then RDMSR will either succeed (if the
guest is allowed to access it, based on its MSR policy) or produce a
#GP fault. A guest will never see the H/W value of an MSR unknown to
this function.
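The transition scheme — try the new unified handler first, and fall back
to the legacy code only for MSRs it does not yet know about — can be
sketched like this (a mock with invented names, not the actual Xen call
chain):

```c
#include <stdint.h>

enum mock_x86emul { MOCK_OKAY, MOCK_EXCEPTION, MOCK_UNHANDLEABLE };

/* New unified handler: only the converted MSRs are known to it. */
static enum mock_x86emul mock_new_rdmsr(uint32_t msr, uint64_t *val)
{
    if ( msr == 0xce )          /* pretend only PLATFORM_INFO is converted */
    {
        *val = 0;
        return MOCK_OKAY;
    }
    return MOCK_UNHANDLEABLE;
}

/* Legacy per-arch handler serving everything else. */
static enum mock_x86emul mock_legacy_rdmsr(uint32_t msr, uint64_t *val)
{
    (void)msr;
    *val = 42;                  /* pretend some legacy case handled it */
    return MOCK_OKAY;
}

static enum mock_x86emul mock_read_intercept(uint32_t msr, uint64_t *val)
{
    enum mock_x86emul ret = mock_new_rdmsr(msr, val);

    if ( ret != MOCK_UNHANDLEABLE )
        return ret;             /* new infrastructure had the final say */

    return mock_legacy_rdmsr(msr, val);
}
```

Once an MSR is converted, the legacy handler is never reached for it,
so its behaviour is fully determined by the policy object.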

guest_rdmsr() unifies and replaces the handling code from
vmx_msr_read_intercept() and priv_op_read_msr().

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/hvm.c         |  7 ++++++-
 xen/arch/x86/hvm/vmx/vmx.c     | 10 ----------
 xen/arch/x86/msr.c             | 31 +++++++++++++++++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 22 ++++------------------
 xen/include/asm-x86/msr.h      |  8 ++++++++
 5 files changed, 49 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2ad07d52bc..ec7205ee32 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3334,11 +3334,16 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     struct vcpu *v = current;
     struct domain *d = v->domain;
     uint64_t *var_range_base, *fixed_range_base;
-    int ret = X86EMUL_OKAY;
+    int ret;
 
     var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
     fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
 
+    if ( (ret = guest_rdmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
+        return ret;
+    else
+        ret = X86EMUL_OKAY;
+
     switch ( msr )
     {
         unsigned int index;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 155fba9017..ac34383658 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2896,16 +2896,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             goto gp_fault;
         break;
 
-    case MSR_INTEL_PLATFORM_INFO:
-        *msr_content = MSR_PLATFORM_INFO_CPUID_FAULTING;
-        break;
-
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
-        *msr_content = 0;
-        if ( current->arch.msr->misc_features_enables.cpuid_faulting )
-            *msr_content |= MSR_MISC_FEATURES_CPUID_FAULTING;
-        break;
-
     default:
         if ( passive_domain_do_rdmsr(msr, msr_content) )
             goto done;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b5ad97d3c8..a822a132ad 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -117,6 +117,37 @@ int init_vcpu_msr_policy(struct vcpu *v)
     return 0;
 }
 
+int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
+{
+    const struct msr_domain_policy *dp = v->domain->arch.msr;
+    const struct msr_vcpu_policy *vp = v->arch.msr;
+
+    switch ( msr )
+    {
+    case MSR_INTEL_PLATFORM_INFO:
+        if ( !dp->plaform_info.available )
+            goto gp_fault;
+        *val = (uint64_t) dp->plaform_info.cpuid_faulting <<
+                   _MSR_PLATFORM_INFO_CPUID_FAULTING;
+        break;
+
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
+        if ( !vp->misc_features_enables.available )
+            goto gp_fault;
+        *val = (uint64_t) vp->misc_features_enables.cpuid_faulting <<
+                   _MSR_MISC_FEATURES_CPUID_FAULTING;
+        break;
+
+    default:
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    return X86EMUL_OKAY;
+
+ gp_fault:
+    return X86EMUL_EXCEPTION;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 66cda538fc..d563214fc4 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -834,6 +834,10 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
     const struct vcpu *curr = current;
     const struct domain *currd = curr->domain;
     bool vpmu_msr = false;
+    int ret;
+
+    if ( (ret = guest_rdmsr(curr, reg, val)) != X86EMUL_UNHANDLEABLE )
+        return ret;
 
     switch ( reg )
     {
@@ -934,24 +938,6 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
         *val = 0;
         return X86EMUL_OKAY;
 
-    case MSR_INTEL_PLATFORM_INFO:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *val) )
-            break;
-        *val = 0;
-        if ( this_cpu(cpuid_faulting_enabled) )
-            *val |= MSR_PLATFORM_INFO_CPUID_FAULTING;
-        return X86EMUL_OKAY;
-
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, *val) )
-            break;
-        *val = 0;
-        if ( curr->arch.msr->misc_features_enables.cpuid_faulting )
-            *val |= MSR_MISC_FEATURES_CPUID_FAULTING;
-        return X86EMUL_OKAY;
-
     case MSR_P6_PERFCTR(0) ... MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0) ... MSR_P6_EVNTSEL(3):
     case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 7c8395b9b3..9cc505cb40 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -226,6 +226,14 @@ void init_guest_msr_policy(void);
 int init_domain_msr_policy(struct domain *d);
 int init_vcpu_msr_policy(struct vcpu *v);
 
+/*
+ * Below functions can return X86EMUL_UNHANDLEABLE which means that MSR is
+ * not (yet) handled by it and must be processed by legacy handlers. Such
+ * behaviour is needed for transition period until all rd/wrmsr are handled
+ * by the new MSR infrastructure.
+ */
+int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_MSR_H */
-- 
2.11.0




* [PATCH v1 5/5] x86/msr: introduce guest_wrmsr()
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
                   ` (3 preceding siblings ...)
  2017-08-30 10:34 ` [PATCH v1 4/5] x86/msr: introduce guest_rdmsr() Sergey Dyasli
@ 2017-08-30 10:34 ` Sergey Dyasli
  2017-08-30 10:50   ` Andrew Cooper
  2017-09-21  2:04   ` Tian, Kevin
  2017-08-30 10:50 ` [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Andrew Cooper
  2017-09-11  7:29 ` Sergey Dyasli
  6 siblings, 2 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-08-30 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Jan Beulich

The new function is responsible for handling WRMSR from both HVM and PV
guests. Currently it handles only 2 MSRs:

    MSR_INTEL_PLATFORM_INFO
    MSR_INTEL_MISC_FEATURES_ENABLES

It has a different behaviour compared to the old MSR handlers: if an
MSR is handled by guest_wrmsr(), then WRMSR will either succeed (if the
guest is allowed to access it and provided a correct value, based on
its MSR policy) or produce a #GP fault. A guest will never see a
successful WRMSR of an MSR unknown to this function.

guest_wrmsr() unifies and replaces the handling code from
vmx_msr_write_intercept() and priv_op_write_msr().
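The succeed-or-#GP contract described above can be modelled in isolation. The sketch below is a toy, not the posted code: struct toy_policy and toy_wrmsr() are illustrative stand-ins for struct msr_vcpu_policy and the real MISC_FEATURES_ENABLES handling, mirroring the reserved-bit pattern from the diff further down:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define X86EMUL_OKAY         0
#define X86EMUL_UNHANDLEABLE 1
#define X86EMUL_EXCEPTION    2

#define MSR_MISC_FEATURES_CPUID_FAULTING (1ULL << 0)

/* Toy stand-in for the relevant bits of struct msr_vcpu_policy. */
struct toy_policy {
    bool available;              /* MSR visible to the guest at all? */
    bool cpuid_faulting_allowed; /* domain policy permits the feature */
    bool cpuid_faulting;         /* current per-vCPU value */
};

/*
 * Models the contract: a write either updates the policy or raises #GP;
 * it is never silently discarded and never falls through to hardware
 * for an MSR this function knows about.
 */
static int toy_wrmsr(struct toy_policy *vp, uint64_t val)
{
    uint64_t rsvd = ~0ULL;

    if ( !vp->available )
        return X86EMUL_EXCEPTION;        /* MSR hidden by policy -> #GP */

    if ( vp->cpuid_faulting_allowed )
        rsvd &= ~MSR_MISC_FEATURES_CPUID_FAULTING;

    if ( val & rsvd )
        return X86EMUL_EXCEPTION;        /* reserved bit set -> #GP */

    vp->cpuid_faulting = val & MSR_MISC_FEATURES_CPUID_FAULTING;
    return X86EMUL_OKAY;
}
```

Everything outside the computed writable mask faults, which is what prevents unknown or disallowed bits from being silently accepted.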

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/hvm.c         |  7 ++++++-
 xen/arch/x86/hvm/vmx/vmx.c     | 23 ----------------------
 xen/arch/x86/msr.c             | 44 ++++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 22 ++++-----------------
 xen/include/asm-x86/msr.h      |  1 +
 5 files changed, 55 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ec7205ee32..524f9a37c0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3465,7 +3465,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    int ret = X86EMUL_OKAY;
+    int ret;
 
     HVMTRACE_3D(MSR_WRITE, msr,
                (uint32_t)msr_content, (uint32_t)(msr_content >> 32));
@@ -3483,6 +3483,11 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         return X86EMUL_OKAY;
     }
 
+    if ( (ret = guest_wrmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
+        return ret;
+    else
+        ret = X86EMUL_OKAY;
+
     switch ( msr )
     {
         unsigned int index;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ac34383658..cea2a1ae55 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3116,29 +3116,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             goto gp_fault;
         break;
 
-    case MSR_INTEL_PLATFORM_INFO:
-        if ( msr_content ||
-             rdmsr_safe(MSR_INTEL_PLATFORM_INFO, msr_content) )
-            goto gp_fault;
-        break;
-
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
-    {
-        struct msr_vcpu_policy *vp = v->arch.msr;
-        bool old_cpuid_faulting = vp->misc_features_enables.cpuid_faulting;
-
-        if ( msr_content & ~MSR_MISC_FEATURES_CPUID_FAULTING )
-            goto gp_fault;
-
-        vp->misc_features_enables.cpuid_faulting =
-            msr_content & MSR_MISC_FEATURES_CPUID_FAULTING;
-
-        if ( cpu_has_cpuid_faulting &&
-             (old_cpuid_faulting ^ vp->misc_features_enables.cpuid_faulting) )
-            ctxt_switch_levelling(v);
-        break;
-    }
-
     default:
         if ( passive_domain_do_wrmsr(msr, msr_content) )
             return X86EMUL_OKAY;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index a822a132ad..9202a4a476 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -148,6 +148,50 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     return X86EMUL_EXCEPTION;
 }
 
+int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
+{
+    struct domain *d = v->domain;
+    struct msr_domain_policy *dp = d->arch.msr;
+    struct msr_vcpu_policy *vp = v->arch.msr;
+
+    switch ( msr )
+    {
+    case MSR_INTEL_PLATFORM_INFO:
+        goto gp_fault;
+
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
+    {
+        uint64_t rsvd = ~0ull;
+        bool old_cpuid_faulting = vp->misc_features_enables.cpuid_faulting;
+
+        if ( !vp->misc_features_enables.available )
+            goto gp_fault;
+
+        if ( dp->plaform_info.cpuid_faulting )
+            rsvd &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
+
+        if ( val & rsvd )
+            goto gp_fault;
+
+        vp->misc_features_enables.cpuid_faulting =
+            val & MSR_MISC_FEATURES_CPUID_FAULTING;
+
+        if ( is_hvm_domain(d) && cpu_has_cpuid_faulting &&
+             (old_cpuid_faulting ^ vp->misc_features_enables.cpuid_faulting) )
+            ctxt_switch_levelling(v);
+        break;
+    }
+
+    default:
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    return X86EMUL_OKAY;
+
+ gp_fault:
+    return X86EMUL_EXCEPTION;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index d563214fc4..d32af7d45d 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -983,6 +983,10 @@ static int priv_op_write_msr(unsigned int reg, uint64_t val,
     struct vcpu *curr = current;
     const struct domain *currd = curr->domain;
     bool vpmu_msr = false;
+    int ret;
+
+    if ( (ret = guest_wrmsr(curr, reg, val)) != X86EMUL_UNHANDLEABLE )
+        return ret;
 
     switch ( reg )
     {
@@ -1126,24 +1130,6 @@ static int priv_op_write_msr(unsigned int reg, uint64_t val,
             wrmsrl(reg, val);
         return X86EMUL_OKAY;
 
-    case MSR_INTEL_PLATFORM_INFO:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             val || rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val) )
-            break;
-        return X86EMUL_OKAY;
-
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             (val & ~MSR_MISC_FEATURES_CPUID_FAULTING) ||
-             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, temp) )
-            break;
-        if ( (val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
-             !this_cpu(cpuid_faulting_enabled) )
-            break;
-        curr->arch.msr->misc_features_enables.cpuid_faulting =
-            !!(val & MSR_MISC_FEATURES_CPUID_FAULTING);
-        return X86EMUL_OKAY;
-
     case MSR_P6_PERFCTR(0) ... MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0) ... MSR_P6_EVNTSEL(3):
     case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 9cc505cb40..751fa25a36 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -233,6 +233,7 @@ int init_vcpu_msr_policy(struct vcpu *v);
  * by the new MSR infrastructure.
  */
 int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
+int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val);
 
 #endif /* !__ASSEMBLY__ */
 
-- 
2.11.0



* Re: [PATCH v1 4/5] x86/msr: introduce guest_rdmsr()
  2017-08-30 10:34 ` [PATCH v1 4/5] x86/msr: introduce guest_rdmsr() Sergey Dyasli
@ 2017-08-30 10:43   ` Andrew Cooper
  2017-09-21  1:53   ` Tian, Kevin
  1 sibling, 0 replies; 17+ messages in thread
From: Andrew Cooper @ 2017-08-30 10:43 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	George Dunlap, Ian Jackson, Tim Deegan, Jan Beulich

On 30/08/17 11:34, Sergey Dyasli wrote:
> The new function is responsible for handling RDMSR from both HVM and PV
> guests. Currently it handles only 2 MSRs:
>
>     MSR_INTEL_PLATFORM_INFO
>     MSR_INTEL_MISC_FEATURES_ENABLES
>
> It has a different behaviour compared to the old MSR handlers: if MSR
> is being handled by guest_rdmsr() then RDMSR will either succeed (if
> a guest is allowed to access it based on its MSR policy) or produce
> a GP fault. A guest will never see a H/W value of some MSR unknown to
> this function.
>
> guest_rdmsr() unifies and replaces the handling code from
> vmx_msr_read_intercept() and priv_op_read_msr().
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

You should also note that this (along with the prep work in
init_domain_msr_policy()) fixes a bug where dom0 could probe and find
CPUID faulting, even though it couldn't actually use it.

~Andrew


* Re: [PATCH v1 5/5] x86/msr: introduce guest_wrmsr()
  2017-08-30 10:34 ` [PATCH v1 5/5] x86/msr: introduce guest_wrmsr() Sergey Dyasli
@ 2017-08-30 10:50   ` Andrew Cooper
  2017-09-21  2:04   ` Tian, Kevin
  1 sibling, 0 replies; 17+ messages in thread
From: Andrew Cooper @ 2017-08-30 10:50 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	George Dunlap, Ian Jackson, Tim Deegan, Jan Beulich

On 30/08/17 11:34, Sergey Dyasli wrote:
> The new function is responsible for handling WRMSR from both HVM and PV
> guests. Currently it handles only 2 MSRs:
>
>     MSR_INTEL_PLATFORM_INFO
>     MSR_INTEL_MISC_FEATURES_ENABLES
>
> It has a different behaviour compared to the old MSR handlers: if MSR
> is being handled by guest_wrmsr() then WRMSR will either succeed (if
> a guest is allowed to access it and provided a correct value based on
> its MSR policy) or produce a GP fault. A guest will never see
> a successful WRMSR of some MSR unknown to this function.
>
> guest_wrmsr() unifies and replaces the handling code from
> vmx_msr_write_intercept() and priv_op_write_msr().
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

You should also note that this fixes a bug on AMD hardware where a guest
which tries to enable CPUID faulting via a direct write to
MSR_INTEL_MISC_FEATURES_ENABLES will observe it appearing to succeed,
even though Xen actually ignored the write.

~Andrew


* Re: [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
                   ` (4 preceding siblings ...)
  2017-08-30 10:34 ` [PATCH v1 5/5] x86/msr: introduce guest_wrmsr() Sergey Dyasli
@ 2017-08-30 10:50 ` Andrew Cooper
  2017-09-11  7:29 ` Sergey Dyasli
  6 siblings, 0 replies; 17+ messages in thread
From: Andrew Cooper @ 2017-08-30 10:50 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	George Dunlap, Ian Jackson, Tim Deegan, Jan Beulich

On 30/08/17 11:34, Sergey Dyasli wrote:
> Sergey Dyasli (5):
>   x86/msr: introduce struct msr_domain_policy
>   x86/msr: introduce struct msr_vcpu_policy
>   x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy
>   x86/msr: introduce guest_rdmsr()
>   x86/msr: introduce guest_wrmsr()

All Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting
  2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
                   ` (5 preceding siblings ...)
  2017-08-30 10:50 ` [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Andrew Cooper
@ 2017-09-11  7:29 ` Sergey Dyasli
  2017-09-11  9:57   ` Jan Beulich
  6 siblings, 1 reply; 17+ messages in thread
From: Sergey Dyasli @ 2017-09-11  7:29 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, sstabellini, Wei Liu, jun.nakajima,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, jbeulich, Ian Jackson

Ping?

On Wed, 2017-08-30 at 11:34 +0100, Sergey Dyasli wrote:
> Currently there are the following issues with handling guest's RD/WRMSR
> in Xen:
> 
> 1. There is no way to configure which MSRs a guest can and can't access.
>    And if there is no MSR handler in Xen for a particular MSR then
>    the default behavior is just horrible:
> 
>         RDMSR: rdmsr_safe(msr, *msr_content) /* returns a H/W value */
>         WRMSR: return X86EMUL_OKAY;  /* write is silently discarded */
> 
> 2. There are too many handlers. Example for RDMSR:
>         priv_op_read_msr()
>         hvm_msr_read_intercept()
>         vmce_rdmsr()
>         svm_msr_read_intercept()
>         vmx_msr_read_intercept()
>         nvmx_msr_read_intercept()
>         rdmsr_viridian_regs()
>         ...
> 
> This series tries to address the above issues in the following way.
> 2 types of MSR policy objects are introduced:
> 
>     1. Per-Domain policy (struct msr_domain_policy) -- for shared MSRs
>     2. Per-vCPU policy (struct msr_vcpu_policy) -- for unique MSRs
> 
> Each domain and each vCPU inside a domain will now have an associated
> MSR policy object. Contents of these structures are defined during
> domain creation. For now, it's just a copy of either HVM_MAX or PV_MAX
> policy, depending on a guest's type. But in the future it should be
> possible to control the availability and values in guest's MSRs from
> a toolstack. However, any MSR manipulations must be done together with
> CPUID ones.
> 
> Once policy objects are in place, it becomes easy to introduce unified
> guest's RDMSR and WRMSR handlers. They work directly with MSR policy
> objects since all the state of guest's MSRs is contained there.
> 
> Main idea of having MSR policy is to define a set and contents of MSRs
> that a guest sees. All other MSRs should be inaccessible (access would
> generate a GP fault). And this MSR information should also be sent in
> the migration stream.
> 
> Since it's impossible to convert all MSRs to use the new infrastructure
> right away, this series starts with 2 MSRs responsible for CPUID
> faulting:
> 
>     1. MSR_INTEL_PLATFORM_INFO
>     2. MSR_INTEL_MISC_FEATURES_ENABLES
> 
> My previous VMX MSR policy patch set will be rebased on top of this
> generic MSR infrastructure after it's merged.
> 
> Sergey Dyasli (5):
>   x86/msr: introduce struct msr_domain_policy
>   x86/msr: introduce struct msr_vcpu_policy
>   x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy
>   x86/msr: introduce guest_rdmsr()
>   x86/msr: introduce guest_wrmsr()
> 
>  xen/arch/x86/Makefile          |   1 +
>  xen/arch/x86/cpu/intel.c       |   3 +-
>  xen/arch/x86/domain.c          |  24 ++++-
>  xen/arch/x86/hvm/hvm.c         |  18 +++-
>  xen/arch/x86/hvm/vmx/vmx.c     |  31 -------
>  xen/arch/x86/msr.c             | 203 +++++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/pv/emul-inv-op.c  |   4 +-
>  xen/arch/x86/pv/emul-priv-op.c |  43 ++-------
>  xen/arch/x86/setup.c           |   1 +
>  xen/include/asm-x86/domain.h   |   8 +-
>  xen/include/asm-x86/msr.h      |  33 +++++++
>  11 files changed, 292 insertions(+), 77 deletions(-)
>  create mode 100644 xen/arch/x86/msr.c

* Re: [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting
  2017-09-11  7:29 ` Sergey Dyasli
@ 2017-09-11  9:57   ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2017-09-11  9:57 UTC (permalink / raw)
  To: Sergey Dyasli
  Cc: Kevin Tian, sstabellini, Wei Liu, Andrew Cooper, Tim (Xen.org),
	George Dunlap, xen-devel, jun.nakajima, Ian Jackson

>>> On 11.09.17 at 09:29, <sergey.dyasli@citrix.com> wrote:
> Ping?

From a purely formal perspective, with Andrew's review there's only
an ack missing for the VMX changes afaict (and I think Andrew did ask
for one or two description changes, so it might be best if you carried
them out and re-sent). While I would like to look over it, it's sitting
next to some 70 other patches in my to-be-reviewed queue, so I can't
make predictions when I might get to it.

Jan



* Re: [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy
  2017-08-30 10:34 ` [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy Sergey Dyasli
@ 2017-09-21  1:41   ` Tian, Kevin
  0 siblings, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-09-21  1:41 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Nakajima, Jun, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, August 30, 2017 6:34 PM
> 
> The new structure contains information about guest's MSRs that are
> shared between all domain's vCPUs. It starts with only 1 MSR:
> 
>     MSR_INTEL_PLATFORM_INFO
> 
> Which currently has only 1 usable bit: cpuid_faulting.
> 
> Add 2 global policy objects: hvm_max and pv_max that are inited during
> boot up. It's always possible to emulate CPUID faulting for HVM guests
> while for PV guests the H/W support is required.
> 
> Add init_domain_msr_policy() which sets initial MSR policy during
> domain creation with a special case for Dom0.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> ---
>  xen/arch/x86/Makefile        |  1 +
>  xen/arch/x86/domain.c        |  6 +++
>  xen/arch/x86/msr.c           | 95
> ++++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/setup.c         |  1 +
>  xen/include/asm-x86/domain.h |  3 +-
>  xen/include/asm-x86/msr.h    | 13 ++++++
>  6 files changed, 118 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/x86/msr.c
> 
> diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
> index 93ead6e5dd..d5d58a205e 100644
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -35,6 +35,7 @@ obj-y += i8259.o
>  obj-y += io_apic.o
>  obj-$(CONFIG_LIVEPATCH) += alternative.o livepatch.o
>  obj-y += msi.o
> +obj-y += msr.o
>  obj-y += ioport_emulate.o
>  obj-y += irq.o
>  obj-$(CONFIG_KEXEC) += machine_kexec.o
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index baaf8151d2..620666b33a 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -425,6 +425,7 @@ int arch_domain_create(struct domain *d,
> unsigned int domcr_flags,
>      {
>          d->arch.emulation_flags = 0;
>          d->arch.cpuid = ZERO_BLOCK_PTR; /* Catch stray misuses. */
> +        d->arch.msr = ZERO_BLOCK_PTR;
>      }
>      else
>      {
> @@ -470,6 +471,9 @@ int arch_domain_create(struct domain *d,
> unsigned int domcr_flags,
>          if ( (rc = init_domain_cpuid_policy(d)) )
>              goto fail;
> 
> +        if ( (rc = init_domain_msr_policy(d)) )
> +            goto fail;
> +
>          d->arch.ioport_caps =
>              rangeset_new(d, "I/O Ports", RANGESETF_prettyprint_hex);
>          rc = -ENOMEM;
> @@ -540,6 +544,7 @@ int arch_domain_create(struct domain *d,
> unsigned int domcr_flags,
>      cleanup_domain_irq_mapping(d);
>      free_xenheap_page(d->shared_info);
>      xfree(d->arch.cpuid);
> +    xfree(d->arch.msr);
>      if ( paging_initialised )
>          paging_final_teardown(d);
>      free_perdomain_mappings(d);
> @@ -554,6 +559,7 @@ void arch_domain_destroy(struct domain *d)
> 
>      xfree(d->arch.e820);
>      xfree(d->arch.cpuid);
> +    xfree(d->arch.msr);
> 
>      free_domain_pirqs(d);
>      if ( !is_idle_domain(d) )
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> new file mode 100644
> index 0000000000..eac50ec897
> --- /dev/null
> +++ b/xen/arch/x86/msr.c
> @@ -0,0 +1,95 @@
> +/************************************************************
> ******************
> + * arch/x86/msr.c
> + *
> + * Policy objects for Model-Specific Registers.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + *
> + * Copyright (c) 2017 Citrix Systems Ltd.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/lib.h>
> +#include <xen/sched.h>
> +#include <asm/msr.h>
> +
> +struct msr_domain_policy __read_mostly hvm_max_msr_domain_policy,
> +                         __read_mostly  pv_max_msr_domain_policy;
> +
> +static void __init calculate_hvm_max_policy(void)
> +{
> +    struct msr_domain_policy *dp = &hvm_max_msr_domain_policy;
> +
> +    if ( !hvm_enabled )
> +        return;
> +
> +    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */

not needed since the code is straightforward?

> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
> +    {
> +        dp->plaform_info.available = true;
> +        dp->plaform_info.cpuid_faulting = true;
> +    }
> +}
> +
> +static void __init calculate_pv_max_policy(void)
> +{
> +    struct msr_domain_policy *dp = &pv_max_msr_domain_policy;
> +
> +    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
> +    if ( cpu_has_cpuid_faulting )
> +    {
> +        dp->plaform_info.available = true;
> +        dp->plaform_info.cpuid_faulting = true;
> +    }
> +}
> +
> +void __init init_guest_msr_policy(void)
> +{
> +    calculate_hvm_max_policy();
> +    calculate_pv_max_policy();
> +}
> +
> +int init_domain_msr_policy(struct domain *d)
> +{
> +    struct msr_domain_policy *dp;
> +
> +    dp = xmalloc(struct msr_domain_policy);
> +
> +    if ( !dp )
> +        return -ENOMEM;
> +
> +    *dp = is_pv_domain(d) ? pv_max_msr_domain_policy :
> +                            hvm_max_msr_domain_policy;
> +
> +    /* See comment in intel_ctxt_switch_levelling() */

I'd prefer a brief description here, e.g. "CPUID faulting
has to be disabled for the control domain due to the limitation
described in intel_ctxt_switch_levelling()"

> +    if ( is_control_domain(d) )
> +    {
> +        dp->plaform_info.available = false;
> +        dp->plaform_info.cpuid_faulting = false;
> +    }
> +
> +    d->arch.msr = dp;
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index db5df6956d..b9a769d92c 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1560,6 +1560,7 @@ void __init noreturn __start_xen(unsigned long
> mbi_p)
>          panic("Could not protect TXT memory regions");
> 
>      init_guest_cpuid();
> +    init_guest_msr_policy();
> 
>      if ( dom0_pvh )
>      {
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-
> x86/domain.h
> index c10522b7f5..f08ede3a05 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -356,8 +356,9 @@ struct arch_domain
>       */
>      uint8_t x87_fip_width;
> 
> -    /* CPUID Policy. */
> +    /* CPUID and MSR policy objects. */
>      struct cpuid_policy *cpuid;
> +    struct msr_domain_policy *msr;
> 
>      struct PITState vpit;
> 
> diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
> index 7004b6f398..5cf7be1821 100644
> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -202,6 +202,19 @@ void write_efer(u64 val);
> 
>  DECLARE_PER_CPU(u32, ler_msr);
> 
> +/* MSR policy object for shared per-domain MSRs */
> +struct msr_domain_policy
> +{
> +    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
> +    struct {
> +        bool available; /* This MSR is non-architectural */

does "non-architectural" matter in future policy control? If not,
there's no need to mention it.

> +        bool cpuid_faulting;
> +    } plaform_info;
> +};
> +
> +void init_guest_msr_policy(void);
> +int init_domain_msr_policy(struct domain *d);
> +
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif /* __ASM_MSR_H */
> --
> 2.11.0

Above are all cosmetic comments. Here is my:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy
  2017-08-30 10:34 ` [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy Sergey Dyasli
@ 2017-09-21  1:45   ` Tian, Kevin
  2017-09-25  8:32   ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-09-21  1:45 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Nakajima, Jun, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, August 30, 2017 6:35 PM
> 
> The new structure contains information about guest's MSRs that are
> unique to each vCPU. It starts with only 1 MSR:
> 
>     MSR_INTEL_MISC_FEATURES_ENABLES
> 
> Which currently has only 1 usable bit: cpuid_faulting.
> 
> Add 2 global policy objects: hvm_max and pv_max that are inited during
> boot up. Availability of MSR_INTEL_MISC_FEATURES_ENABLES depends on
> availability of MSR_INTEL_PLATFORM_INFO.
> 
> Add init_vcpu_msr_policy() which sets initial MSR policy for every vCPU
> during domain creation with a special case for Dom0.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

with similar cosmetic comments:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v1 3/5] x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy
  2017-08-30 10:34 ` [PATCH v1 3/5] x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy Sergey Dyasli
@ 2017-09-21  1:48   ` Tian, Kevin
  0 siblings, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-09-21  1:48 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Nakajima, Jun, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, August 30, 2017 6:35 PM
> 
> Since each vCPU now has struct msr_vcpu_policy, use cpuid_faulting bit
> from there in current logic and remove arch_vcpu::cpuid_faulting.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v1 4/5] x86/msr: introduce guest_rdmsr()
  2017-08-30 10:34 ` [PATCH v1 4/5] x86/msr: introduce guest_rdmsr() Sergey Dyasli
  2017-08-30 10:43   ` Andrew Cooper
@ 2017-09-21  1:53   ` Tian, Kevin
  1 sibling, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-09-21  1:53 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Nakajima, Jun, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, August 30, 2017 6:35 PM
> 
> The new function is responsible for handling RDMSR from both HVM and
> PV
> guests. Currently it handles only 2 MSRs:
> 
>     MSR_INTEL_PLATFORM_INFO
>     MSR_INTEL_MISC_FEATURES_ENABLES
> 
> It has a different behaviour compared to the old MSR handlers: if MSR
> is being handled by guest_rdmsr() then RDMSR will either succeed (if
> a guest is allowed to access it based on its MSR policy) or produce
> a GP fault. A guest will never see a H/W value of some MSR unknown to
> this function.
> 
> guest_rdmsr() unifies and replaces the handling code from
> vmx_msr_read_intercept() and priv_op_read_msr().
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>, with a small comment:

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3334,11 +3334,16 @@ int hvm_msr_read_intercept(unsigned int msr,
> uint64_t *msr_content)
>      struct vcpu *v = current;
>      struct domain *d = v->domain;
>      uint64_t *var_range_base, *fixed_range_base;
> -    int ret = X86EMUL_OKAY;
> +    int ret;
> 
>      var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
>      fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
> 
> +    if ( (ret = guest_rdmsr(v, msr, msr_content)) !=
> X86EMUL_UNHANDLEABLE )
> +        return ret;
> +    else
> +        ret = X86EMUL_OKAY;
> +

no need to add 'else' here.
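Kevin's point can be sketched with a self-contained toy (the toy_* names are illustrative stand-ins, not the posted code): the early return already guarantees ret == X86EMUL_UNHANDLEABLE on the fall-through path, so the assignment can stand on its own without an 'else':

```c
#include <assert.h>
#include <stdint.h>

#define X86EMUL_OKAY         0
#define X86EMUL_UNHANDLEABLE 1

/* Hypothetical stand-in for guest_rdmsr(): knows MSR 1, not MSR 2. */
static int toy_guest_rdmsr(uint32_t msr, uint64_t *val)
{
    if ( msr == 1 )
    {
        *val = 42;
        return X86EMUL_OKAY;
    }
    return X86EMUL_UNHANDLEABLE;
}

static int toy_intercept(uint32_t msr, uint64_t *val)
{
    int ret;

    /* The early return means the fall-through path always has
     * ret == X86EMUL_UNHANDLEABLE, so no 'else' is needed before
     * re-priming ret for the legacy path. */
    if ( (ret = toy_guest_rdmsr(msr, val)) != X86EMUL_UNHANDLEABLE )
        return ret;
    ret = X86EMUL_OKAY;

    *val = 0;    /* stand-in for the legacy handling */
    return ret;
}
```

With the 'else' dropped, behaviour is unchanged: handled MSRs return early, unhandled ones re-prime ret and continue into the legacy code.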


* Re: [PATCH v1 5/5] x86/msr: introduce guest_wrmsr()
  2017-08-30 10:34 ` [PATCH v1 5/5] x86/msr: introduce guest_wrmsr() Sergey Dyasli
  2017-08-30 10:50   ` Andrew Cooper
@ 2017-09-21  2:04   ` Tian, Kevin
  1 sibling, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-09-21  2:04 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Nakajima, Jun, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, August 30, 2017 6:35 PM
> 
> The new function is responsible for handling WRMSR from both HVM and
> PV
> guests. Currently it handles only 2 MSRs:
> 
>     MSR_INTEL_PLATFORM_INFO
>     MSR_INTEL_MISC_FEATURES_ENABLES
> 
> It has a different behaviour compared to the old MSR handlers: if MSR
> is being handled by guest_wrmsr() then WRMSR will either succeed (if
> a guest is allowed to access it and provided a correct value based on
> its MSR policy) or produce a GP fault. A guest will never see
> a successful WRMSR of some MSR unknown to this function.
> 
> guest_wrmsr() unifies and replaces the handling code from
> vmx_msr_write_intercept() and priv_op_write_msr().
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy
  2017-08-30 10:34 ` [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy Sergey Dyasli
  2017-09-21  1:45   ` Tian, Kevin
@ 2017-09-25  8:32   ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2017-09-25  8:32 UTC (permalink / raw)
  To: Sergey Dyasli
  Cc: TimDeegan, Kevin Tian, Stefano Stabellini, Wei Liu,
	George Dunlap, Andrew Cooper, IanJackson, xen-devel,
	Jun Nakajima

>>> On 30.08.17 at 12:34, <sergey.dyasli@citrix.com> wrote:
> @@ -40,11 +44,15 @@ static void __init calculate_hvm_max_policy(void)
>          dp->plaform_info.available = true;
>          dp->plaform_info.cpuid_faulting = true;
>      }
> +
> +    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
> +    vp->misc_features_enables.available = dp->plaform_info.available;
>  }
>  
>  static void __init calculate_pv_max_policy(void)
>  {
>      struct msr_domain_policy *dp = &pv_max_msr_domain_policy;
> +    struct msr_vcpu_policy *vp = &pv_max_msr_vcpu_policy;
>  
>      /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>      if ( cpu_has_cpuid_faulting )
> @@ -52,6 +60,9 @@ static void __init calculate_pv_max_policy(void)
>          dp->plaform_info.available = true;
>          dp->plaform_info.cpuid_faulting = true;
>      }
> +
> +    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
> +    vp->misc_features_enables.available = dp->plaform_info.available;
>  }

The similarity of the two changes makes me wonder whether
down the road it wouldn't be better to have a function doing
things which are uniform between HVM and PV.
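The duplication Jan points out could be factored as below. This is a sketch only: the structure shapes are copied from patches 1 and 2 (field spelling as posted, including "plaform_info"), while calculate_common_policy() is a hypothetical name:

```c
#include <assert.h>
#include <stdbool.h>

/* Structure shapes copied from patches 1 and 2. */
struct msr_domain_policy {
    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
    struct {
        bool available;
        bool cpuid_faulting;
    } plaform_info;
};

struct msr_vcpu_policy {
    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
    struct {
        bool available;
    } misc_features_enables;
};

/* Hypothetical helper holding the wiring that is identical in
 * calculate_hvm_max_policy() and calculate_pv_max_policy(). */
static void calculate_common_policy(const struct msr_domain_policy *dp,
                                    struct msr_vcpu_policy *vp)
{
    vp->misc_features_enables.available = dp->plaform_info.available;
}
```

Each calculate_*_max_policy() would then set up its vendor/feature specifics and call the shared helper once.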

> @@ -84,6 +95,28 @@ int init_domain_msr_policy(struct domain *d)
>      return 0;
>  }
>  
> +int init_vcpu_msr_policy(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    struct msr_vcpu_policy *vp;
> +
> +    vp = xmalloc(struct msr_vcpu_policy);
> +
> +    if ( !vp )
> +        return -ENOMEM;
> +
> +    *vp = is_pv_domain(d) ? pv_max_msr_vcpu_policy :
> +                            hvm_max_msr_vcpu_policy;
> +
> +    /* See comment in intel_ctxt_switch_levelling() */
> +    if ( is_control_domain(d) )
> +        vp->misc_features_enables.available = false;

It's CPUID faulting that's meant to be suppressed, not the MSR's
presence.
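A sketch of the adjustment being suggested (assumed shape, not the committed fix): the control-domain special case would clear only the cpuid_faulting bit, leaving the MSR itself visible.

```c
#include <assert.h>
#include <stdbool.h>

struct msr_vcpu_policy {
    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
    struct {
        bool available;
        bool cpuid_faulting;
    } misc_features_enables;
};

/* Hypothetical corrected special case: suppress CPUID faulting for the
 * control domain without hiding the MSR's presence. */
static void control_domain_fixup(struct msr_vcpu_policy *vp,
                                 bool is_control_domain)
{
    if ( is_control_domain )
        vp->misc_features_enables.cpuid_faulting = false;
    /* vp->misc_features_enables.available is deliberately left alone */
}
```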

> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -212,8 +212,19 @@ struct msr_domain_policy
>      } plaform_info;
>  };
>  
> +/* MSR policy object for per-vCPU MSRs */
> +struct msr_vcpu_policy
> +{
> +    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
> +    struct {
> +        bool available; /* This MSR is non-architectural */
> +        bool cpuid_faulting;

Here as well as in patch 1 I think down the road we want these
to be single-bit bitfields, to limit the growth of the structure.
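For illustration, the single-bit-bitfield layout being suggested might look like this (a hypothetical follow-up shape, not code from the series): on common ABIs the two flags then share one byte instead of occupying a bool each, which keeps the structure compact as policy entries for more MSRs are added.

```c
#include <assert.h>
#include <stdbool.h>

struct msr_vcpu_policy {
    /* 0x00000140  MSR_INTEL_MISC_FEATURES_ENABLES */
    struct {
        bool available:1;       /* This MSR is non-architectural */
        bool cpuid_faulting:1;
    } misc_features_enables;
};
```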

In any event, none of the comments are meant to prevent the
series from going in as-is - in fact I'm preparing to commit it as
I'm going over the patches. Follow-up changes for some of the
items pointed out would be nice post-4.10.

Jan




end of thread, other threads:[~2017-09-25  8:32 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-30 10:34 [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Sergey Dyasli
2017-08-30 10:34 ` [PATCH v1 1/5] x86/msr: introduce struct msr_domain_policy Sergey Dyasli
2017-09-21  1:41   ` Tian, Kevin
2017-08-30 10:34 ` [PATCH v1 2/5] x86/msr: introduce struct msr_vcpu_policy Sergey Dyasli
2017-09-21  1:45   ` Tian, Kevin
2017-09-25  8:32   ` Jan Beulich
2017-08-30 10:34 ` [PATCH v1 3/5] x86: replace arch_vcpu::cpuid_faulting with msr_vcpu_policy Sergey Dyasli
2017-09-21  1:48   ` Tian, Kevin
2017-08-30 10:34 ` [PATCH v1 4/5] x86/msr: introduce guest_rdmsr() Sergey Dyasli
2017-08-30 10:43   ` Andrew Cooper
2017-09-21  1:53   ` Tian, Kevin
2017-08-30 10:34 ` [PATCH v1 5/5] x86/msr: introduce guest_wrmsr() Sergey Dyasli
2017-08-30 10:50   ` Andrew Cooper
2017-09-21  2:04   ` Tian, Kevin
2017-08-30 10:50 ` [PATCH v1 0/5] Generic MSR policy: infrastructure + cpuid_faulting Andrew Cooper
2017-09-11  7:29 ` Sergey Dyasli
2017-09-11  9:57   ` Jan Beulich
