* [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1
@ 2017-10-13 12:35 Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 1/6] x86/msr: add Raw and Host domain policies Sergey Dyasli
` (5 more replies)
0 siblings, 6 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
The end goal of having a VMX MSRs policy is to be able to manage
L1 VMX features. This patch series is the first part of that work.
There is no functional change to what L1 sees in VMX MSRs at this
point, but each domain will have a policy object which makes it
possible to sensibly query which VMX features the domain has. This
will unblock some other nested virtualization work items.
Currently, when nested virt is enabled, the set of L1 VMX features
is fixed and is calculated by nvmx_msr_read_intercept() as the
intersection of the full set of L1 VMX features supported by Xen, the
set of actual H/W features and, for MSR_IA32_VMX_EPT_VPID_CAP, the set
of features that Xen itself uses.
This makes the L1 VMX feature set inconsistent across different H/W,
and there is no way to control which features are available to L1.
The overall set of issues has much in common with CPUID policy.
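The allowed-0/allowed-1 encoding that this intersection operates on can be sketched as follows (an illustration only; vmx_ctls_intersect() is a hypothetical helper, not a function from this series):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A VMX control capability MSR encodes a feature set in two halves:
 * bits 31:0 are the allowed-0 settings (a bit set here means the
 * control is fixed to 1) and bits 63:32 are the allowed-1 settings
 * (a bit set here means the control may be enabled).  Intersecting
 * two such values keeps only controls both sides can offer, while
 * accumulating every must-be-1 bit.
 */
static uint64_t vmx_ctls_intersect(uint64_t a, uint64_t b)
{
    uint32_t allowed_0 = (uint32_t)a | (uint32_t)b; /* must-be-1 accumulates */
    uint32_t allowed_1 = (a >> 32) & (b >> 32);     /* may-be-1 intersects */

    return ((uint64_t)allowed_1 << 32) | allowed_0;
}
```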
Part 1 adds VMX MSRs into struct msr_domain_policy and initializes them
during domain creation based on CPUID policy. In the future it should be
possible to independently configure values of VMX MSRs for each domain.
v2 --> v3:
- Rebase on top of Generic MSR Policy
- Each VMX MSR now has its own availability flag
- VMX MSRs are now completely defined during domain creation
(all CPUID policy changes are taken into account)
Sergey Dyasli (6):
x86/msr: add Raw and Host domain policies
x86/msr: add VMX MSRs into struct msr_domain_policy
x86/msr: read VMX MSRs values into Raw policy
x86/msr: add VMX MSRs into HVM_max domain policy
x86/msr: update domain policy on CPUID policy changes
x86/msr: handle VMX MSRs with guest_rd/wrmsr()
xen/arch/x86/domctl.c | 1 +
xen/arch/x86/hvm/hvm.c | 8 +-
xen/arch/x86/hvm/vmx/vmx.c | 6 -
xen/arch/x86/hvm/vmx/vvmx.c | 178 ----------------
xen/arch/x86/msr.c | 387 +++++++++++++++++++++++++++++++++-
xen/include/asm-x86/hvm/hvm.h | 1 +
xen/include/asm-x86/hvm/vmx/vvmx.h | 2 -
xen/include/asm-x86/msr.h | 417 +++++++++++++++++++++++++++++++++++++
8 files changed, 811 insertions(+), 189 deletions(-)
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
* [PATCH v3 1/6] x86/msr: add Raw and Host domain policies
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
@ 2017-10-13 12:35 ` Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy Sergey Dyasli
` (4 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
The Raw policy contains the actual values read from H/W MSRs. The
PLATFORM_INFO MSR needs to be read again because
probe_intel_cpuid_faulting() records the presence of
X86_FEATURE_CPUID_FAULTING but not the presence of the MSR itself
(when CPUID faulting is unavailable).
The Host policy may have certain features disabled if Xen decides not
to use them. For now, make the Host policy equal to the Raw policy.
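The probe pattern can be sketched with a toy stub in place of the real (privileged) MSR read — toy_rdmsr_safe() and the toy_* types below are stand-ins for illustration, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for Xen's rdmsr_safe(): returns 0 and fills *val on
 * success, non-zero if the read would fault. */
static int toy_rdmsr_safe(uint64_t *val, bool msr_present)
{
    if ( !msr_present )
        return -1;
    *val = 1ULL << 31; /* pretend the CPUID-faulting capability bit is set */
    return 0;
}

/* The probe pattern from calculate_raw_policy(): the presence of the
 * MSR itself is recorded separately from the feature bit inside it. */
struct toy_platform_info { bool available, cpuid_faulting; };

static struct toy_platform_info toy_probe(bool msr_present)
{
    struct toy_platform_info pi = { false, false };
    uint64_t val;

    if ( toy_rdmsr_safe(&val, msr_present) == 0 )
    {
        pi.available = true;
        if ( val & (1ULL << 31) )
            pi.cpuid_faulting = true;
    }
    return pi;
}
```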
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
xen/arch/x86/msr.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index baba44f43d..9737ed706e 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -24,12 +24,34 @@
#include <xen/sched.h>
#include <asm/msr.h>
-struct msr_domain_policy __read_mostly hvm_max_msr_domain_policy,
+struct msr_domain_policy __read_mostly raw_msr_domain_policy,
+ __read_mostly host_msr_domain_policy,
+ __read_mostly hvm_max_msr_domain_policy,
__read_mostly pv_max_msr_domain_policy;
struct msr_vcpu_policy __read_mostly hvm_max_msr_vcpu_policy,
__read_mostly pv_max_msr_vcpu_policy;
+static void __init calculate_raw_policy(void)
+{
+ struct msr_domain_policy *dp = &raw_msr_domain_policy;
+ uint64_t val;
+
+ if ( rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val) == 0 )
+ {
+ dp->plaform_info.available = true;
+ if ( val & MSR_PLATFORM_INFO_CPUID_FAULTING )
+ dp->plaform_info.cpuid_faulting = true;
+ }
+}
+
+static void __init calculate_host_policy(void)
+{
+ struct msr_domain_policy *dp = &host_msr_domain_policy;
+
+ *dp = raw_msr_domain_policy;
+}
+
static void __init calculate_hvm_max_policy(void)
{
struct msr_domain_policy *dp = &hvm_max_msr_domain_policy;
@@ -67,6 +89,8 @@ static void __init calculate_pv_max_policy(void)
void __init init_guest_msr_policy(void)
{
+ calculate_raw_policy();
+ calculate_host_policy();
calculate_hvm_max_policy();
calculate_pv_max_policy();
}
--
2.11.0
* [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 1/6] x86/msr: add Raw and Host domain policies Sergey Dyasli
@ 2017-10-13 12:35 ` Sergey Dyasli
2017-10-13 15:16 ` Andrew Cooper
2017-10-13 12:35 ` [PATCH v3 3/6] x86/msr: read VMX MSRs values into Raw policy Sergey Dyasli
` (3 subsequent siblings)
5 siblings, 1 reply; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
The new definitions provide a convenient way of accessing the contents
of VMX MSRs: every bit value is accessible by its name, and there is a
"raw" 64-bit MSR value. Bit names match Xen's existing definitions as
closely as possible.
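The access pattern can be sketched in miniature (illustration only: toy_vmx_basic shows just the low fields of MSR_IA32_VMX_BASIC, relying on the little-endian bitfield packing Xen already assumes):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A union of the raw 64-bit MSR value with named bitfields, so each
 * bit is readable by name while the whole value remains accessible. */
union toy_vmx_basic {
    uint64_t raw;
    struct {
        uint32_t vmcs_revision_id:31;
        bool mbz:1;                   /* bit 31, always zero */
        uint32_t vmcs_region_size:13; /* bits 44:32 */
        uint32_t :19;                 /* bits 63:45 omitted in the sketch */
    };
};
```

The BUILD_BUG_ON() checks in build_assertions() exist precisely to catch a bitfield layout that drifts out of sync with the raw value's size.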
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
xen/arch/x86/msr.c | 42 +++++
xen/include/asm-x86/msr.h | 414 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 456 insertions(+)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 9737ed706e..24029a2ac1 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -216,6 +216,48 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
return X86EMUL_EXCEPTION;
}
+static void __init __maybe_unused build_assertions(void)
+{
+ struct msr_domain_policy p;
+
+ BUILD_BUG_ON(sizeof(p.vmx_basic.u) !=
+ sizeof(p.vmx_basic.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_pinbased_ctls.u) !=
+ sizeof(p.vmx_pinbased_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_procbased_ctls.u) !=
+ sizeof(p.vmx_procbased_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_exit_ctls.u) !=
+ sizeof(p.vmx_exit_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_entry_ctls.u) !=
+ sizeof(p.vmx_entry_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_misc.u) !=
+ sizeof(p.vmx_misc.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_cr0_fixed0.u) !=
+ sizeof(p.vmx_cr0_fixed0.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_cr0_fixed1.u) !=
+ sizeof(p.vmx_cr0_fixed1.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_cr4_fixed0.u) !=
+ sizeof(p.vmx_cr4_fixed0.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_cr4_fixed1.u) !=
+ sizeof(p.vmx_cr4_fixed1.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_vmcs_enum.u) !=
+ sizeof(p.vmx_vmcs_enum.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_procbased_ctls2.u) !=
+ sizeof(p.vmx_procbased_ctls2.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_ept_vpid_cap.u) !=
+ sizeof(p.vmx_ept_vpid_cap.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_true_pinbased_ctls.u) !=
+ sizeof(p.vmx_true_pinbased_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_true_procbased_ctls.u) !=
+ sizeof(p.vmx_true_procbased_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_true_exit_ctls.u) !=
+ sizeof(p.vmx_true_exit_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_true_entry_ctls.u) !=
+ sizeof(p.vmx_true_entry_ctls.u.raw));
+ BUILD_BUG_ON(sizeof(p.vmx_vmfunc.u) !=
+ sizeof(p.vmx_vmfunc.u.raw));
+}
+
/*
* Local variables:
* mode: C
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 751fa25a36..fc99612cca 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -202,6 +202,171 @@ void write_efer(u64 val);
DECLARE_PER_CPU(u32, ler_msr);
+union vmx_pin_based_exec_control_bits {
+ uint32_t raw;
+ struct {
+ bool ext_intr_exiting:1;
+ uint32_t :2; /* 1:2 reserved */
+ bool nmi_exiting:1;
+ uint32_t :1; /* 4 reserved */
+ bool virtual_nmis:1;
+ bool preempt_timer:1;
+ bool posted_interrupt:1;
+ uint32_t :24; /* 8:31 reserved */
+ };
+};
+
+union vmx_cpu_based_exec_control_bits {
+ uint32_t raw;
+ struct {
+ uint32_t :2; /* 0:1 reserved */
+ bool virtual_intr_pending:1;
+ bool use_tsc_offseting:1;
+ uint32_t :3; /* 4:6 reserved */
+ bool hlt_exiting:1;
+ uint32_t :1; /* 8 reserved */
+ bool invlpg_exiting:1;
+ bool mwait_exiting:1;
+ bool rdpmc_exiting:1;
+ bool rdtsc_exiting:1;
+ uint32_t :2; /* 13:14 reserved */
+ bool cr3_load_exiting:1;
+ bool cr3_store_exiting:1;
+ uint32_t :2; /* 17:18 reserved */
+ bool cr8_load_exiting:1;
+ bool cr8_store_exiting:1;
+ bool tpr_shadow:1;
+ bool virtual_nmi_pending:1;
+ bool mov_dr_exiting:1;
+ bool uncond_io_exiting:1;
+ bool activate_io_bitmap:1;
+ uint32_t :1; /* 26 reserved */
+ bool monitor_trap_flag:1;
+ bool activate_msr_bitmap:1;
+ bool monitor_exiting:1;
+ bool pause_exiting:1;
+ bool activate_secondary_controls:1;
+ };
+};
+
+union vmx_vmexit_control_bits {
+ uint32_t raw;
+ struct {
+ uint32_t :2; /* 0:1 reserved */
+ bool save_debug_cntrls:1;
+ uint32_t :6; /* 3:8 reserved */
+ bool ia32e_mode:1;
+ uint32_t :2; /* 10:11 reserved */
+ bool load_perf_global_ctrl:1;
+ uint32_t :2; /* 13:14 reserved */
+ bool ack_intr_on_exit:1;
+ uint32_t :2; /* 16:17 reserved */
+ bool save_guest_pat:1;
+ bool load_host_pat:1;
+ bool save_guest_efer:1;
+ bool load_host_efer:1;
+ bool save_preempt_timer:1;
+ bool clear_bndcfgs:1;
+ bool conceal_vmexits_from_pt:1;
+ uint32_t :7; /* 25:31 reserved */
+ };
+};
+
+union vmx_vmentry_control_bits {
+ uint32_t raw;
+ struct {
+ uint32_t :2; /* 0:1 reserved */
+ bool load_debug_cntrls:1;
+ uint32_t :6; /* 3:8 reserved */
+ bool ia32e_mode:1;
+ bool smm:1;
+ bool deact_dual_monitor:1;
+ uint32_t :1; /* 12 reserved */
+ bool load_perf_global_ctrl:1;
+ bool load_guest_pat:1;
+ bool load_guest_efer:1;
+ bool load_bndcfgs:1;
+ bool conceal_vmentries_from_pt:1;
+ uint32_t :14; /* 18:31 reserved */
+ };
+};
+
+union vmx_secondary_exec_control_bits {
+ uint32_t raw;
+ struct {
+ bool virtualize_apic_accesses:1;
+ bool enable_ept:1;
+ bool descriptor_table_exiting:1;
+ bool enable_rdtscp:1;
+ bool virtualize_x2apic_mode:1;
+ bool enable_vpid:1;
+ bool wbinvd_exiting:1;
+ bool unrestricted_guest:1;
+ bool apic_register_virt:1;
+ bool virtual_intr_delivery:1;
+ bool pause_loop_exiting:1;
+ bool rdrand_exiting:1;
+ bool enable_invpcid:1;
+ bool enable_vm_functions:1;
+ bool enable_vmcs_shadowing:1;
+ bool encls_exiting:1;
+ bool rdseed_exiting:1;
+ bool enable_pml:1;
+ bool enable_virt_exceptions:1;
+ bool conceal_vmx_nonroot_from_pt:1;
+ bool xsaves:1;
+ uint32_t :1; /* 21 reserved */
+ bool ept_mode_based_exec_cntrl:1;
+ uint32_t :2; /* 23:24 reserved */
+ bool tsc_scaling:1;
+ uint32_t :6; /* 26:31 reserved */
+ };
+};
+
+struct cr0_bits {
+ bool pe:1;
+ bool mp:1;
+ bool em:1;
+ bool ts:1;
+ bool et:1;
+ bool ne:1;
+ uint32_t :10; /* 6:15 reserved */
+ bool wp:1;
+ uint32_t :1; /* 17 reserved */
+ bool am:1;
+ uint32_t :10; /* 19:28 reserved */
+ bool nw:1;
+ bool cd:1;
+ bool pg:1;
+};
+
+struct cr4_bits {
+ bool vme:1;
+ bool pvi:1;
+ bool tsd:1;
+ bool de:1;
+ bool pse:1;
+ bool pae:1;
+ bool mce:1;
+ bool pge:1;
+ bool pce:1;
+ bool osfxsr:1;
+ bool osxmmexcpt:1;
+ bool umip:1;
+ uint32_t :1; /* 12 reserved */
+ bool vmxe:1;
+ bool smxe:1;
+ uint32_t :1; /* 15 reserved */
+ bool fsgsbase:1;
+ bool pcide:1;
+ bool osxsave:1;
+ uint32_t :1; /* 19 reserved */
+ bool smep:1;
+ bool smap:1;
+ bool pke:1;
+ uint32_t :9; /* 23:31 reserved */
+};
+
/* MSR policy object for shared per-domain MSRs */
struct msr_domain_policy
{
@@ -210,6 +375,255 @@ struct msr_domain_policy
bool available; /* This MSR is non-architectural */
bool cpuid_faulting;
} plaform_info;
+
+ /* 0x00000480 MSR_IA32_VMX_BASIC */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ uint32_t vmcs_revision_id:31;
+ bool mbz:1; /* 31 always zero */
+ uint32_t vmcs_region_size:13;
+ uint32_t :3; /* 45:47 reserved */
+ bool addresses_32bit:1;
+ bool dual_monitor:1;
+ uint32_t memory_type:4;
+ bool ins_out_info:1;
+ bool default1_zero:1;
+ uint32_t :8; /* 56:63 reserved */
+ };
+ } u;
+ } vmx_basic;
+
+ /* 0x00000481 MSR_IA32_VMX_PINBASED_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_pin_based_exec_control_bits allowed_0;
+ union vmx_pin_based_exec_control_bits allowed_1;
+ };
+ } u;
+ } vmx_pinbased_ctls;
+
+ /* 0x00000482 MSR_IA32_VMX_PROCBASED_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_cpu_based_exec_control_bits allowed_0;
+ union vmx_cpu_based_exec_control_bits allowed_1;
+ };
+ } u;
+ } vmx_procbased_ctls;
+
+ /* 0x00000483 MSR_IA32_VMX_EXIT_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_vmexit_control_bits allowed_0;
+ union vmx_vmexit_control_bits allowed_1;
+ };
+ } u;
+ } vmx_exit_ctls;
+
+ /* 0x00000484 MSR_IA32_VMX_ENTRY_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_vmentry_control_bits allowed_0;
+ union vmx_vmentry_control_bits allowed_1;
+ };
+ } u;
+ } vmx_entry_ctls;
+
+ /* 0x00000485 MSR_IA32_VMX_MISC */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ uint32_t preempt_timer_scale:5;
+ bool vmexit_stores_lma:1;
+ bool hlt_activity_state:1;
+ bool shutdown_activity_state:1;
+ bool wait_for_sipi_activity_state:1;
+ uint32_t :5; /* 9:13 reserved */
+ bool pt_in_vmx:1;
+ bool ia32_smbase_support:1;
+ uint32_t cr3_target:9;
+ uint32_t max_msr_load_count:3;
+ bool ia32_smm_monitor_ctl_bit2:1;
+ bool vmwrite_all:1;
+ bool inject_ilen0_event:1;
+ uint32_t :1; /* 31 reserved */
+ uint32_t mseg_revision_id;
+ };
+ } u;
+ } vmx_misc;
+
+ /* 0x00000486 MSR_IA32_VMX_CR0_FIXED0 */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct cr0_bits allowed_0;
+ } u;
+ } vmx_cr0_fixed0;
+
+ /* 0x00000487 MSR_IA32_VMX_CR0_FIXED1 */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct cr0_bits allowed_1;
+ } u;
+ } vmx_cr0_fixed1;
+
+ /* 0x00000488 MSR_IA32_VMX_CR4_FIXED0 */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct cr4_bits allowed_0;
+ } u;
+ } vmx_cr4_fixed0;
+
+ /* 0x00000489 MSR_IA32_VMX_CR4_FIXED1 */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct cr4_bits allowed_1;
+ } u;
+ } vmx_cr4_fixed1;
+
+ /* 0x0000048A MSR_IA32_VMX_VMCS_ENUM */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ uint32_t :1; /* 0 reserved */
+ uint32_t vmcs_encoding_max_idx:9;
+ uint64_t :54; /* 10:63 reserved */
+ };
+ } u;
+ } vmx_vmcs_enum;
+
+ /* 0x0000048B MSR_IA32_VMX_PROCBASED_CTLS2 */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_secondary_exec_control_bits allowed_0;
+ union vmx_secondary_exec_control_bits allowed_1;
+ };
+ } u;
+ } vmx_procbased_ctls2;
+
+ /* 0x0000048C MSR_IA32_VMX_EPT_VPID_CAP */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ bool exec_only_supported:1;
+ uint32_t :5; /* 1:5 reserved */
+ bool walk_length_4_supported:1;
+ uint32_t :1; /* 7 reserved */
+ bool memory_type_uc:1;
+ uint32_t :5; /* 9:13 reserved */
+ bool memory_type_wb:1;
+ uint32_t :1; /* 15 reserved */
+ bool superpage_2mb:1;
+ bool superpage_1gb:1;
+ uint32_t :2; /* 18:19 reserved */
+ bool invept_instruction:1;
+ bool ad_bit:1;
+ bool advanced_ept_violations:1;
+ uint32_t :2; /* 23:24 reserved */
+ bool invept_single_context:1;
+ bool invept_all_context:1;
+ uint32_t :5; /* 27:31 reserved */
+ bool invvpid_instruction:1;
+ uint32_t :7; /* 33:39 reserved */
+ bool invvpid_individual_addr:1;
+ bool invvpid_single_context:1;
+ bool invvpid_all_context:1;
+ bool invvpid_single_context_retaining_global:1;
+ uint32_t :20; /* 44:63 reserved */
+ };
+ } u;
+ } vmx_ept_vpid_cap;
+
+ /* 0x0000048D MSR_IA32_VMX_TRUE_PINBASED_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_pin_based_exec_control_bits allowed_0;
+ union vmx_pin_based_exec_control_bits allowed_1;
+ };
+ } u;
+ } vmx_true_pinbased_ctls;
+
+ /* 0x0000048E MSR_IA32_VMX_TRUE_PROCBASED_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_cpu_based_exec_control_bits allowed_0;
+ union vmx_cpu_based_exec_control_bits allowed_1;
+ };
+ } u;
+ } vmx_true_procbased_ctls;
+
+ /* 0x0000048F MSR_IA32_VMX_TRUE_EXIT_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_vmexit_control_bits allowed_0;
+ union vmx_vmexit_control_bits allowed_1;
+ };
+ } u;
+ } vmx_true_exit_ctls;
+
+ /* 0x00000490 MSR_IA32_VMX_TRUE_ENTRY_CTLS */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ union vmx_vmentry_control_bits allowed_0;
+ union vmx_vmentry_control_bits allowed_1;
+ };
+ } u;
+ } vmx_true_entry_ctls;
+
+ /* 0x00000491 MSR_IA32_VMX_VMFUNC */
+ struct {
+ bool available;
+ union {
+ uint64_t raw;
+ struct {
+ bool eptp_switching:1;
+ uint64_t :63; /* 1:63 reserved */
+ };
+ } u;
+ } vmx_vmfunc;
};
/* MSR policy object for per-vCPU MSRs */
--
2.11.0
* [PATCH v3 3/6] x86/msr: read VMX MSRs values into Raw policy
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 1/6] x86/msr: add Raw and Host domain policies Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy Sergey Dyasli
@ 2017-10-13 12:35 ` Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 4/6] x86/msr: add VMX MSRs into HVM_max domain policy Sergey Dyasli
` (2 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
Add calculate_raw_vmx_policy(), which fills the Raw policy with the H/W
values of VMX MSRs. The Host policy will contain a copy of these values.
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
xen/arch/x86/msr.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 24029a2ac1..955aba0849 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -32,6 +32,81 @@ struct msr_domain_policy __read_mostly raw_msr_domain_policy,
struct msr_vcpu_policy __read_mostly hvm_max_msr_vcpu_policy,
__read_mostly pv_max_msr_vcpu_policy;
+static void __init calculate_raw_vmx_policy(struct msr_domain_policy *dp)
+{
+ if ( !cpu_has_vmx )
+ return;
+
+ dp->vmx_basic.available = true;
+ rdmsrl(MSR_IA32_VMX_BASIC, dp->vmx_basic.u.raw);
+
+ dp->vmx_pinbased_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_PINBASED_CTLS, dp->vmx_pinbased_ctls.u.raw);
+
+ dp->vmx_procbased_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS, dp->vmx_procbased_ctls.u.raw);
+
+ dp->vmx_exit_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_EXIT_CTLS, dp->vmx_exit_ctls.u.raw);
+
+ dp->vmx_entry_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_ENTRY_CTLS, dp->vmx_entry_ctls.u.raw);
+
+ dp->vmx_misc.available = true;
+ rdmsrl(MSR_IA32_VMX_MISC, dp->vmx_misc.u.raw);
+
+ dp->vmx_cr0_fixed0.available = true;
+ rdmsrl(MSR_IA32_VMX_CR0_FIXED0, dp->vmx_cr0_fixed0.u.raw);
+
+ dp->vmx_cr0_fixed1.available = true;
+ rdmsrl(MSR_IA32_VMX_CR0_FIXED1, dp->vmx_cr0_fixed1.u.raw);
+
+ dp->vmx_cr4_fixed0.available = true;
+ rdmsrl(MSR_IA32_VMX_CR4_FIXED0, dp->vmx_cr4_fixed0.u.raw);
+
+ dp->vmx_cr4_fixed1.available = true;
+ rdmsrl(MSR_IA32_VMX_CR4_FIXED1, dp->vmx_cr4_fixed1.u.raw);
+
+ dp->vmx_vmcs_enum.available = true;
+ rdmsrl(MSR_IA32_VMX_VMCS_ENUM, dp->vmx_vmcs_enum.u.raw);
+
+ if ( dp->vmx_procbased_ctls.u.allowed_1.activate_secondary_controls )
+ {
+ dp->vmx_procbased_ctls2.available = true;
+ rdmsrl(MSR_IA32_VMX_PROCBASED_CTLS2, dp->vmx_procbased_ctls2.u.raw);
+
+ if ( dp->vmx_procbased_ctls2.u.allowed_1.enable_ept ||
+ dp->vmx_procbased_ctls2.u.allowed_1.enable_vpid )
+ {
+ dp->vmx_ept_vpid_cap.available = true;
+ rdmsrl(MSR_IA32_VMX_EPT_VPID_CAP, dp->vmx_ept_vpid_cap.u.raw);
+ }
+ }
+
+ if ( dp->vmx_basic.u.default1_zero )
+ {
+ dp->vmx_true_pinbased_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_TRUE_PINBASED_CTLS,
+ dp->vmx_true_pinbased_ctls.u.raw);
+
+ dp->vmx_true_procbased_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_TRUE_PROCBASED_CTLS,
+ dp->vmx_true_procbased_ctls.u.raw);
+
+ dp->vmx_true_exit_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_TRUE_EXIT_CTLS, dp->vmx_true_exit_ctls.u.raw);
+
+ dp->vmx_true_entry_ctls.available = true;
+ rdmsrl(MSR_IA32_VMX_TRUE_ENTRY_CTLS, dp->vmx_true_entry_ctls.u.raw);
+ }
+
+ if ( dp->vmx_procbased_ctls2.u.allowed_1.enable_vm_functions )
+ {
+ dp->vmx_vmfunc.available = true;
+ rdmsrl(MSR_IA32_VMX_VMFUNC, dp->vmx_vmfunc.u.raw);
+ }
+}
+
static void __init calculate_raw_policy(void)
{
struct msr_domain_policy *dp = &raw_msr_domain_policy;
@@ -43,6 +118,8 @@ static void __init calculate_raw_policy(void)
if ( val & MSR_PLATFORM_INFO_CPUID_FAULTING )
dp->plaform_info.cpuid_faulting = true;
}
+
+ calculate_raw_vmx_policy(dp);
}
static void __init calculate_host_policy(void)
--
2.11.0
* [PATCH v3 4/6] x86/msr: add VMX MSRs into HVM_max domain policy
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
` (2 preceding siblings ...)
2017-10-13 12:35 ` [PATCH v3 3/6] x86/msr: read VMX MSRs values into Raw policy Sergey Dyasli
@ 2017-10-13 12:35 ` Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr() Sergey Dyasli
5 siblings, 0 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
Currently, when nested virt is enabled, the set of L1 VMX features
is fixed and is calculated by nvmx_msr_read_intercept() as the
intersection of the full set of L1 VMX features supported by Xen, the
set of actual H/W features and, for MSR_IA32_VMX_EPT_VPID_CAP, the set
of features that Xen itself uses.
Add calculate_hvm_max_vmx_policy(), which saves the end result of
nvmx_msr_read_intercept() on the current H/W into the HVM_max domain
policy. There is no functional change to what L1 sees in VMX MSRs yet:
the HVM_max domain policy will only be put to use later, when VMX MSRs
are handled by guest_rd/wrmsr().
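The seeding arithmetic can be sketched as follows (hypothetical helpers, not code from this series; the assumed layout is the usual one — low 32 bits are allowed-0 settings, high 32 bits allowed-1). Seeding both halves with the DEFAULT1 mask fixes the default1 controls to 1 and every other control to 0; individual features are then opened up by copying the host's allowed-0/allowed-1 pair per bit, which is what the vmx_host_allowed_cpy() macro does per named field:

```c
#include <assert.h>
#include <stdint.h>

/* Replicate a 32-bit DEFAULT1 mask into both halves of the MSR value:
 * default1 controls become fixed-1, all others fixed-0. */
static uint64_t seed_ctls(uint32_t default1)
{
    return ((uint64_t)default1 << 32) | default1;
}

/* Copy one control's allowed-0 (bit) and allowed-1 (bit + 32) settings
 * from the host value into the guest value. */
static uint64_t copy_host_bit(uint64_t guest, uint64_t host, unsigned int bit)
{
    uint64_t mask = (1ULL << bit) | (1ULL << (bit + 32));

    return (guest & ~mask) | (host & mask);
}
```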
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
xen/arch/x86/msr.c | 140 +++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 140 insertions(+)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 955aba0849..388f19e50d 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -129,6 +129,144 @@ static void __init calculate_host_policy(void)
*dp = raw_msr_domain_policy;
}
+#define vmx_host_allowed_cpy(dp, msr, field) \
+ do { \
+ dp->msr.u.allowed_1.field = \
+ host_msr_domain_policy.msr.u.allowed_1.field; \
+ dp->msr.u.allowed_0.field = \
+ host_msr_domain_policy.msr.u.allowed_0.field; \
+ } while (0)
+
+static void __init calculate_hvm_max_vmx_policy(struct msr_domain_policy *dp)
+{
+ if ( !cpu_has_vmx )
+ return;
+
+ dp->vmx_basic.available = true;
+ dp->vmx_basic.u.raw = host_msr_domain_policy.vmx_basic.u.raw;
+
+ dp->vmx_pinbased_ctls.available = true;
+ dp->vmx_pinbased_ctls.u.raw =
+ ((uint64_t) VMX_PINBASED_CTLS_DEFAULT1 << 32) |
+ VMX_PINBASED_CTLS_DEFAULT1;
+ vmx_host_allowed_cpy(dp, vmx_pinbased_ctls, ext_intr_exiting);
+ vmx_host_allowed_cpy(dp, vmx_pinbased_ctls, nmi_exiting);
+ vmx_host_allowed_cpy(dp, vmx_pinbased_ctls, preempt_timer);
+
+ dp->vmx_procbased_ctls.available = true;
+ dp->vmx_procbased_ctls.u.raw =
+ ((uint64_t) VMX_PROCBASED_CTLS_DEFAULT1 << 32) |
+ VMX_PROCBASED_CTLS_DEFAULT1;
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, virtual_intr_pending);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, use_tsc_offseting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, hlt_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, invlpg_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, mwait_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, rdpmc_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, rdtsc_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, cr8_load_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, cr8_store_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, tpr_shadow);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, virtual_nmi_pending);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, mov_dr_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, uncond_io_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, activate_io_bitmap);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, monitor_trap_flag);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, activate_msr_bitmap);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, monitor_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, pause_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls, activate_secondary_controls);
+
+ dp->vmx_exit_ctls.available = true;
+ dp->vmx_exit_ctls.u.raw =
+ ((uint64_t) VMX_EXIT_CTLS_DEFAULT1 << 32) |
+ VMX_EXIT_CTLS_DEFAULT1;
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, ia32e_mode);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, load_perf_global_ctrl);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, ack_intr_on_exit);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, save_guest_pat);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, load_host_pat);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, save_guest_efer);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, load_host_efer);
+ vmx_host_allowed_cpy(dp, vmx_exit_ctls, save_preempt_timer);
+
+ dp->vmx_entry_ctls.available = true;
+ dp->vmx_entry_ctls.u.raw =
+ ((uint64_t) VMX_ENTRY_CTLS_DEFAULT1 << 32) |
+ VMX_ENTRY_CTLS_DEFAULT1;
+ vmx_host_allowed_cpy(dp, vmx_entry_ctls, ia32e_mode);
+ vmx_host_allowed_cpy(dp, vmx_entry_ctls, load_perf_global_ctrl);
+ vmx_host_allowed_cpy(dp, vmx_entry_ctls, load_guest_pat);
+ vmx_host_allowed_cpy(dp, vmx_entry_ctls, load_guest_efer);
+
+ dp->vmx_misc.available = true;
+ dp->vmx_misc.u.raw = host_msr_domain_policy.vmx_misc.u.raw;
+ /* Do not support CR3-target feature now */
+ dp->vmx_misc.u.cr3_target = false;
+
+ dp->vmx_cr0_fixed0.available = true;
+ /* PG, PE bits must be 1 in VMX operation */
+ dp->vmx_cr0_fixed0.u.allowed_0.pe = true;
+ dp->vmx_cr0_fixed0.u.allowed_0.pg = true;
+
+ dp->vmx_cr0_fixed1.available = true;
+ /* allow 0-settings for all bits */
+ dp->vmx_cr0_fixed1.u.raw = 0xffffffff;
+
+ dp->vmx_cr4_fixed0.available = true;
+ /* VMXE bit must be 1 in VMX operation */
+ dp->vmx_cr4_fixed0.u.allowed_0.vmxe = true;
+
+ dp->vmx_cr4_fixed1.available = true;
+ /*
+ * Allowed CR4 bits will be updated during domain creation by
+ * hvm_cr4_guest_valid_bits()
+ */
+ dp->vmx_cr4_fixed1.u.raw = host_msr_domain_policy.vmx_cr4_fixed1.u.raw;
+
+ dp->vmx_vmcs_enum.available = true;
+ /* The max index of VVMCS encoding is 0x1f. */
+ dp->vmx_vmcs_enum.u.vmcs_encoding_max_idx = 0x1f;
+
+ if ( dp->vmx_procbased_ctls.u.allowed_1.activate_secondary_controls )
+ {
+ dp->vmx_procbased_ctls2.available = true;
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls2, virtualize_apic_accesses);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls2, enable_ept);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls2, descriptor_table_exiting);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls2, enable_vpid);
+ vmx_host_allowed_cpy(dp, vmx_procbased_ctls2, unrestricted_guest);
+
+ if ( dp->vmx_procbased_ctls2.u.allowed_1.enable_ept ||
+ dp->vmx_procbased_ctls2.u.allowed_1.enable_vpid )
+ {
+ dp->vmx_ept_vpid_cap.available = true;
+ dp->vmx_ept_vpid_cap.u.raw = nept_get_ept_vpid_cap();
+ }
+ }
+
+ if ( dp->vmx_basic.u.default1_zero )
+ {
+ dp->vmx_true_pinbased_ctls.available = true;
+ dp->vmx_true_pinbased_ctls.u.raw = dp->vmx_pinbased_ctls.u.raw;
+
+ dp->vmx_true_procbased_ctls.available = true;
+ dp->vmx_true_procbased_ctls.u.raw = dp->vmx_procbased_ctls.u.raw;
+ vmx_host_allowed_cpy(dp, vmx_true_procbased_ctls, cr3_load_exiting);
+ vmx_host_allowed_cpy(dp, vmx_true_procbased_ctls, cr3_store_exiting);
+
+ dp->vmx_true_exit_ctls.available = true;
+ dp->vmx_true_exit_ctls.u.raw = dp->vmx_exit_ctls.u.raw;
+
+ dp->vmx_true_entry_ctls.available = true;
+ dp->vmx_true_entry_ctls.u.raw = dp->vmx_entry_ctls.u.raw;
+ }
+
+ dp->vmx_vmfunc.available = false;
+}
+
+#undef vmx_host_allowed_cpy
+
static void __init calculate_hvm_max_policy(void)
{
struct msr_domain_policy *dp = &hvm_max_msr_domain_policy;
@@ -146,6 +284,8 @@ static void __init calculate_hvm_max_policy(void)
/* 0x00000140 MSR_INTEL_MISC_FEATURES_ENABLES */
vp->misc_features_enables.available = dp->plaform_info.available;
+
+ calculate_hvm_max_vmx_policy(dp);
}
static void __init calculate_pv_max_policy(void)
--
2.11.0
* [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
` (3 preceding siblings ...)
2017-10-13 12:35 ` [PATCH v3 4/6] x86/msr: add VMX MSRs into HVM_max domain policy Sergey Dyasli
@ 2017-10-13 12:35 ` Sergey Dyasli
2017-10-13 15:25 ` Andrew Cooper
2017-10-13 12:35 ` [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr() Sergey Dyasli
5 siblings, 1 reply; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
The availability of some MSRs depends on certain CPUID bits. Add
recalculate_domain_msr_policy(), which updates the availability of
per-domain MSRs based on the domain's current CPUID policy. This
function is called when the CPUID policy is changed by the toolstack.
Add recalculate_domain_vmx_msr_policy(), which changes the availability
of VMX MSRs based on the domain's nested virt settings.
Introduce hvm_cr4_domain_valid_bits(), which accepts a struct domain *
instead of a struct vcpu *.
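The recalculation rule boils down to a simple predicate, sketched here with toy types (not the series' actual structures): a domain only keeps its VMX MSRs available while nested virt is enabled and its CPUID policy still advertises VMX.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of one availability flag from the per-domain MSR policy. */
struct toy_msr_policy { bool vmx_basic_available; };

/* Clear availability when either precondition no longer holds. */
static void toy_recalc(struct toy_msr_policy *p, bool nestedhvm_enabled,
                       bool cpuid_has_vmx)
{
    if ( !nestedhvm_enabled || !cpuid_has_vmx )
        p->vmx_basic_available = false;
}
```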
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
xen/arch/x86/domctl.c | 1 +
xen/arch/x86/hvm/hvm.c | 8 +++--
xen/arch/x86/msr.c | 70 ++++++++++++++++++++++++++++++++++++++++++-
xen/include/asm-x86/hvm/hvm.h | 1 +
xen/include/asm-x86/msr.h | 3 ++
5 files changed, 80 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 80b4df9ec9..334c67d261 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -124,6 +124,7 @@ static int update_domain_cpuid_info(struct domain *d,
}
recalculate_cpuid_policy(d);
+ recalculate_domain_msr_policy(d);
switch ( ctl->input[0] )
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 205b4cb685..7e6b15f8d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -928,9 +928,8 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
X86_CR0_CD | X86_CR0_PG)))
/* These bits in CR4 can be set by the guest. */
-unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
+unsigned long hvm_cr4_domain_valid_bits(const struct domain *d, bool restore)
{
- const struct domain *d = v->domain;
const struct cpuid_policy *p;
bool mce, vmxe;
@@ -963,6 +962,11 @@ unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
(p->feat.pku ? X86_CR4_PKE : 0));
}
+unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
+{
+ return hvm_cr4_domain_valid_bits(v->domain, restore);
+}
+
static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 388f19e50d..a22e3dfaf2 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -23,6 +23,7 @@
#include <xen/lib.h>
#include <xen/sched.h>
#include <asm/msr.h>
+#include <asm/hvm/nestedhvm.h>
struct msr_domain_policy __read_mostly raw_msr_domain_policy,
__read_mostly host_msr_domain_policy,
@@ -220,7 +221,7 @@ static void __init calculate_hvm_max_vmx_policy(struct msr_domain_policy *dp)
dp->vmx_cr4_fixed1.available = true;
/*
* Allowed CR4 bits will be updated during domain creation by
- * hvm_cr4_guest_valid_bits()
+ * hvm_cr4_domain_valid_bits()
*/
dp->vmx_cr4_fixed1.u.raw = host_msr_domain_policy.vmx_cr4_fixed1.u.raw;
@@ -312,6 +313,72 @@ void __init init_guest_msr_policy(void)
calculate_pv_max_policy();
}
+static void recalculate_domain_vmx_msr_policy(struct domain *d)
+{
+ struct msr_domain_policy *dp = d->arch.msr;
+
+ if ( !nestedhvm_enabled(d) || !d->arch.cpuid->basic.vmx )
+ {
+ dp->vmx_basic.available = false;
+ dp->vmx_pinbased_ctls.available = false;
+ dp->vmx_procbased_ctls.available = false;
+ dp->vmx_exit_ctls.available = false;
+ dp->vmx_entry_ctls.available = false;
+ dp->vmx_misc.available = false;
+ dp->vmx_cr0_fixed0.available = false;
+ dp->vmx_cr0_fixed1.available = false;
+ dp->vmx_cr4_fixed0.available = false;
+ dp->vmx_cr4_fixed1.available = false;
+ dp->vmx_vmcs_enum.available = false;
+ dp->vmx_procbased_ctls2.available = false;
+ dp->vmx_ept_vpid_cap.available = false;
+ dp->vmx_true_pinbased_ctls.available = false;
+ dp->vmx_true_procbased_ctls.available = false;
+ dp->vmx_true_exit_ctls.available = false;
+ dp->vmx_true_entry_ctls.available = false;
+ }
+ else
+ {
+ dp->vmx_basic.available = true;
+ dp->vmx_pinbased_ctls.available = true;
+ dp->vmx_procbased_ctls.available = true;
+ dp->vmx_exit_ctls.available = true;
+ dp->vmx_entry_ctls.available = true;
+ dp->vmx_misc.available = true;
+ dp->vmx_cr0_fixed0.available = true;
+ dp->vmx_cr0_fixed1.available = true;
+ dp->vmx_cr4_fixed0.available = true;
+ dp->vmx_cr4_fixed1.available = true;
+ /* Get allowed CR4 bits from CPUID policy */
+ dp->vmx_cr4_fixed1.u.raw = hvm_cr4_domain_valid_bits(d, false);
+ dp->vmx_vmcs_enum.available = true;
+
+ if ( dp->vmx_procbased_ctls.u.allowed_1.activate_secondary_controls )
+ {
+ dp->vmx_procbased_ctls2.available = true;
+
+ if ( dp->vmx_procbased_ctls2.u.allowed_1.enable_ept ||
+ dp->vmx_procbased_ctls2.u.allowed_1.enable_vpid )
+ dp->vmx_ept_vpid_cap.available = true;
+ }
+
+ if ( dp->vmx_basic.u.default1_zero )
+ {
+ dp->vmx_true_pinbased_ctls.available = true;
+ dp->vmx_true_procbased_ctls.available = true;
+ dp->vmx_true_exit_ctls.available = true;
+ dp->vmx_true_entry_ctls.available = true;
+ }
+ }
+
+ dp->vmx_vmfunc.available = false;
+}
+
+void recalculate_domain_msr_policy(struct domain *d)
+{
+ recalculate_domain_vmx_msr_policy(d);
+}
+
int init_domain_msr_policy(struct domain *d)
{
struct msr_domain_policy *dp;
@@ -332,6 +399,7 @@ int init_domain_msr_policy(struct domain *d)
}
d->arch.msr = dp;
+ recalculate_domain_msr_policy(d);
return 0;
}
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index b687e03dce..6ff38a6400 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -613,6 +613,7 @@ static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
signed int cr0_pg);
unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore);
+unsigned long hvm_cr4_domain_valid_bits(const struct domain *d, bool restore);
/*
* This must be defined as a macro instead of an inline function,
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index fc99612cca..df8f60e538 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -649,6 +649,9 @@ int init_vcpu_msr_policy(struct vcpu *v);
int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val);
+/* Update availability of per-domain MSRs based on CPUID policy */
+void recalculate_domain_msr_policy(struct domain *d);
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_MSR_H */
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
* [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr()
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
` (4 preceding siblings ...)
2017-10-13 12:35 ` [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes Sergey Dyasli
@ 2017-10-13 12:35 ` Sergey Dyasli
2017-10-13 15:38 ` Andrew Cooper
5 siblings, 1 reply; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-13 12:35 UTC (permalink / raw)
To: xen-devel
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli
Now that each domain has a correct view of VMX MSRs in its per-domain
MSR policy, it's possible to handle guest RDMSR/WRMSR accesses with the new
handlers. Do it and remove the old nvmx_msr_read_intercept() and
associated bits.
There is no functional change to what a guest sees in VMX MSRs.
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
xen/arch/x86/hvm/vmx/vmx.c | 6 --
xen/arch/x86/hvm/vmx/vvmx.c | 178 -------------------------------------
xen/arch/x86/msr.c | 34 +++++++
xen/include/asm-x86/hvm/vmx/vvmx.h | 2 -
4 files changed, 34 insertions(+), 186 deletions(-)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index c2148701ee..1a1cb98069 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2906,10 +2906,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
if ( nestedhvm_enabled(curr->domain) )
*msr_content |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
break;
- case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
- if ( !nvmx_msr_read_intercept(msr, msr_content) )
- goto gp_fault;
- break;
case MSR_IA32_MISC_ENABLE:
rdmsrl(MSR_IA32_MISC_ENABLE, *msr_content);
/* Debug Trace Store is not supported. */
@@ -3133,8 +3129,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
break;
}
case MSR_IA32_FEATURE_CONTROL:
- case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
- /* None of these MSRs are writeable. */
goto gp_fault;
case MSR_P6_PERFCTR(0)...MSR_P6_PERFCTR(7):
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dde02c076b..b0474ad310 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1976,184 +1976,6 @@ int nvmx_handle_invvpid(struct cpu_user_regs *regs)
return X86EMUL_OKAY;
}
-#define __emul_value(enable1, default1) \
- ((enable1 | default1) << 32 | (default1))
-
-#define gen_vmx_msr(enable1, default1, host_value) \
- (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
- ((uint32_t)(__emul_value(enable1, default1) | host_value)))
-
-/*
- * Capability reporting
- */
-int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
-{
- struct vcpu *v = current;
- struct domain *d = v->domain;
- u64 data = 0, host_data = 0;
- int r = 1;
-
- /* VMX capablity MSRs are available only when guest supports VMX. */
- if ( !nestedhvm_enabled(d) || !d->arch.cpuid->basic.vmx )
- return 0;
-
- /*
- * These MSRs are only available when flags in other MSRs are set.
- * These prerequisites are listed in the Intel 64 and IA-32
- * Architectures Software Developer’s Manual, Vol 3, Appendix A.
- */
- switch ( msr )
- {
- case MSR_IA32_VMX_PROCBASED_CTLS2:
- if ( !cpu_has_vmx_secondary_exec_control )
- return 0;
- break;
-
- case MSR_IA32_VMX_EPT_VPID_CAP:
- if ( !(cpu_has_vmx_ept || cpu_has_vmx_vpid) )
- return 0;
- break;
-
- case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
- case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
- case MSR_IA32_VMX_TRUE_EXIT_CTLS:
- case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
- if ( !(vmx_basic_msr & VMX_BASIC_DEFAULT1_ZERO) )
- return 0;
- break;
-
- case MSR_IA32_VMX_VMFUNC:
- if ( !cpu_has_vmx_vmfunc )
- return 0;
- break;
- }
-
- rdmsrl(msr, host_data);
-
- /*
- * Remove unsupport features from n1 guest capability MSR
- */
- switch (msr) {
- case MSR_IA32_VMX_BASIC:
- {
- const struct vmcs_struct *vmcs =
- map_domain_page(_mfn(PFN_DOWN(v->arch.hvm_vmx.vmcs_pa)));
-
- data = (host_data & (~0ul << 32)) |
- (vmcs->vmcs_revision_id & 0x7fffffff);
- unmap_domain_page(vmcs);
- break;
- }
- case MSR_IA32_VMX_PINBASED_CTLS:
- case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
- /* 1-settings */
- data = PIN_BASED_EXT_INTR_MASK |
- PIN_BASED_NMI_EXITING |
- PIN_BASED_PREEMPT_TIMER;
- data = gen_vmx_msr(data, VMX_PINBASED_CTLS_DEFAULT1, host_data);
- break;
- case MSR_IA32_VMX_PROCBASED_CTLS:
- case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
- {
- u32 default1_bits = VMX_PROCBASED_CTLS_DEFAULT1;
- /* 1-settings */
- data = CPU_BASED_HLT_EXITING |
- CPU_BASED_VIRTUAL_INTR_PENDING |
- CPU_BASED_CR8_LOAD_EXITING |
- CPU_BASED_CR8_STORE_EXITING |
- CPU_BASED_INVLPG_EXITING |
- CPU_BASED_CR3_LOAD_EXITING |
- CPU_BASED_CR3_STORE_EXITING |
- CPU_BASED_MONITOR_EXITING |
- CPU_BASED_MWAIT_EXITING |
- CPU_BASED_MOV_DR_EXITING |
- CPU_BASED_ACTIVATE_IO_BITMAP |
- CPU_BASED_USE_TSC_OFFSETING |
- CPU_BASED_UNCOND_IO_EXITING |
- CPU_BASED_RDTSC_EXITING |
- CPU_BASED_MONITOR_TRAP_FLAG |
- CPU_BASED_VIRTUAL_NMI_PENDING |
- CPU_BASED_ACTIVATE_MSR_BITMAP |
- CPU_BASED_PAUSE_EXITING |
- CPU_BASED_RDPMC_EXITING |
- CPU_BASED_TPR_SHADOW |
- CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-
- if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
- default1_bits &= ~(CPU_BASED_CR3_LOAD_EXITING |
- CPU_BASED_CR3_STORE_EXITING |
- CPU_BASED_INVLPG_EXITING);
-
- data = gen_vmx_msr(data, default1_bits, host_data);
- break;
- }
- case MSR_IA32_VMX_PROCBASED_CTLS2:
- /* 1-settings */
- data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
- SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
- SECONDARY_EXEC_ENABLE_VPID |
- SECONDARY_EXEC_UNRESTRICTED_GUEST |
- SECONDARY_EXEC_ENABLE_EPT;
- data = gen_vmx_msr(data, 0, host_data);
- break;
- case MSR_IA32_VMX_EXIT_CTLS:
- case MSR_IA32_VMX_TRUE_EXIT_CTLS:
- /* 1-settings */
- data = VM_EXIT_ACK_INTR_ON_EXIT |
- VM_EXIT_IA32E_MODE |
- VM_EXIT_SAVE_PREEMPT_TIMER |
- VM_EXIT_SAVE_GUEST_PAT |
- VM_EXIT_LOAD_HOST_PAT |
- VM_EXIT_SAVE_GUEST_EFER |
- VM_EXIT_LOAD_HOST_EFER |
- VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
- data = gen_vmx_msr(data, VMX_EXIT_CTLS_DEFAULT1, host_data);
- break;
- case MSR_IA32_VMX_ENTRY_CTLS:
- case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
- /* 1-settings */
- data = VM_ENTRY_LOAD_GUEST_PAT |
- VM_ENTRY_LOAD_GUEST_EFER |
- VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
- VM_ENTRY_IA32E_MODE;
- data = gen_vmx_msr(data, VMX_ENTRY_CTLS_DEFAULT1, host_data);
- break;
-
- case MSR_IA32_VMX_VMCS_ENUM:
- /* The max index of VVMCS encoding is 0x1f. */
- data = 0x1f << 1;
- break;
- case MSR_IA32_VMX_CR0_FIXED0:
- /* PG, PE bits must be 1 in VMX operation */
- data = X86_CR0_PE | X86_CR0_PG;
- break;
- case MSR_IA32_VMX_CR0_FIXED1:
- /* allow 0-settings for all bits */
- data = 0xffffffff;
- break;
- case MSR_IA32_VMX_CR4_FIXED0:
- /* VMXE bit must be 1 in VMX operation */
- data = X86_CR4_VMXE;
- break;
- case MSR_IA32_VMX_CR4_FIXED1:
- data = hvm_cr4_guest_valid_bits(v, 0);
- break;
- case MSR_IA32_VMX_MISC:
- /* Do not support CR3-target feature now */
- data = host_data & ~VMX_MISC_CR3_TARGET;
- break;
- case MSR_IA32_VMX_EPT_VPID_CAP:
- data = nept_get_ept_vpid_cap();
- break;
- default:
- r = 0;
- break;
- }
-
- *msr_content = data;
- return r;
-}
-
/* This function uses L2_gpa to walk the P2M page table in L1. If the
* walk is successful, the translated value is returned in
* L1_gpa. The result value tells what to do next.
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index a22e3dfaf2..2527fdd1d1 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -426,6 +426,13 @@ int init_vcpu_msr_policy(struct vcpu *v)
return 0;
}
+#define vmx_guest_rdmsr(dp, name, msr) \
+ case name: \
+ if ( !dp->msr.available ) \
+ goto gp_fault; \
+ *val = dp->msr.u.raw; \
+ break;
+
int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
{
const struct msr_domain_policy *dp = v->domain->arch.msr;
@@ -447,6 +454,27 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
_MSR_MISC_FEATURES_CPUID_FAULTING;
break;
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_BASIC, vmx_basic);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_PINBASED_CTLS, vmx_pinbased_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_PROCBASED_CTLS, vmx_procbased_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_EXIT_CTLS, vmx_exit_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_ENTRY_CTLS, vmx_entry_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_MISC, vmx_misc);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_CR0_FIXED0, vmx_cr0_fixed0);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_CR0_FIXED1, vmx_cr0_fixed1);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_CR4_FIXED0, vmx_cr4_fixed0);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_CR4_FIXED1, vmx_cr4_fixed1);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_VMCS_ENUM, vmx_vmcs_enum);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_PROCBASED_CTLS2, vmx_procbased_ctls2);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_EPT_VPID_CAP, vmx_ept_vpid_cap);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_TRUE_PINBASED_CTLS,
+ vmx_true_pinbased_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_TRUE_PROCBASED_CTLS,
+ vmx_true_procbased_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_TRUE_EXIT_CTLS, vmx_true_exit_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_TRUE_ENTRY_CTLS, vmx_true_entry_ctls);
+ vmx_guest_rdmsr(dp, MSR_IA32_VMX_VMFUNC, vmx_vmfunc);
+
default:
return X86EMUL_UNHANDLEABLE;
}
@@ -457,6 +485,8 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
return X86EMUL_EXCEPTION;
}
+#undef vmx_guest_rdmsr
+
int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
{
struct domain *d = v->domain;
@@ -491,6 +521,10 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
break;
}
+ case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
+ /* None of these MSRs are writeable. */
+ goto gp_fault;
+
default:
return X86EMUL_UNHANDLEABLE;
}
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 3285b03bbb..5950672e93 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -222,8 +222,6 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs);
int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
int nvmx_handle_invept(struct cpu_user_regs *regs);
int nvmx_handle_invvpid(struct cpu_user_regs *regs);
-int nvmx_msr_read_intercept(unsigned int msr,
- u64 *msr_content);
void nvmx_update_exec_control(struct vcpu *v, u32 value);
void nvmx_update_secondary_exec_control(struct vcpu *v,
--
2.11.0
* Re: [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy
2017-10-13 12:35 ` [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy Sergey Dyasli
@ 2017-10-13 15:16 ` Andrew Cooper
2017-10-16 7:42 ` Sergey Dyasli
0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-10-13 15:16 UTC (permalink / raw)
To: Sergey Dyasli, xen-devel; +Cc: Kevin Tian, Jan Beulich, Jun Nakajima
On 13/10/17 13:35, Sergey Dyasli wrote:
> @@ -210,6 +375,255 @@ struct msr_domain_policy
> bool available; /* This MSR is non-architectural */
> bool cpuid_faulting;
> } plaform_info;
> +
> + /* 0x00000480 MSR_IA32_VMX_BASIC */
> + struct {
> + bool available;
We don't need available bits for any of these MSRs. Their availability
is cpuid->basic.vmx, and we don't want (let alone need) to duplicate
information like this.
The PLATFORM_INFO and MISC_FEATURES_ENABLE are special, because they
have no architecturally defined indication of availability.
> + union {
> + uint64_t raw;
> + struct {
> + uint32_t vmcs_revision_id:31;
> + bool mbz:1; /* 31 always zero */
> + uint32_t vmcs_region_size:13;
> + uint32_t :3; /* 45:47 reserved */
> + bool addresses_32bit:1;
> + bool dual_monitor:1;
> + uint32_t memory_type:4;
> + bool ins_out_info:1;
> + bool default1_zero:1;
> + uint32_t :8; /* 56:63 reserved */
> + };
> + } u;
The code will be rather shorter if you drop this .u and make the union
anonymous.
~Andrew
* Re: [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes
2017-10-13 12:35 ` [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes Sergey Dyasli
@ 2017-10-13 15:25 ` Andrew Cooper
2017-10-16 7:46 ` Sergey Dyasli
0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-10-13 15:25 UTC (permalink / raw)
To: Sergey Dyasli, xen-devel; +Cc: Kevin Tian, Jan Beulich, Jun Nakajima
On 13/10/17 13:35, Sergey Dyasli wrote:
> Availability of some MSRs depends on certain CPUID bits. Add function
> recalculate_domain_msr_policy() which updates availability of per-domain
> MSRs based on current domain's CPUID policy. This function is called
> when CPUID policy is changed from a toolstack.
This is probably acceptable for now. recalculate_cpuid_policy() is only
a transitory artefact between the current behaviour of Xen, and the
future behaviour of auditing the toolstack-provided cpuid and msr policy
completely before changing the domain's data structures.
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 205b4cb685..7e6b15f8d7 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -928,9 +928,8 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
> X86_CR0_CD | X86_CR0_PG)))
>
> /* These bits in CR4 can be set by the guest. */
> -unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
> +unsigned long hvm_cr4_domain_valid_bits(const struct domain *d, bool restore)
> {
> - const struct domain *d = v->domain;
> const struct cpuid_policy *p;
> bool mce, vmxe;
>
> @@ -963,6 +962,11 @@ unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
> (p->feat.pku ? X86_CR4_PKE : 0));
> }
>
> +unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
I'd split this change out into a separate patch and change the existing
guest valid bits to taking a domain *.
It needed to take vcpu in the past because of the old cpuid
infrastructure, but it doesn't need to any more because of the
domain-wide struct cpuid policy.
~Andrew
> +{
> + return hvm_cr4_domain_valid_bits(v->domain, restore);
> +}
> +
> static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> {
> int vcpuid;
>
* Re: [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr()
2017-10-13 12:35 ` [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr() Sergey Dyasli
@ 2017-10-13 15:38 ` Andrew Cooper
2017-10-16 14:50 ` Sergey Dyasli
0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-10-13 15:38 UTC (permalink / raw)
To: Sergey Dyasli, xen-devel; +Cc: Kevin Tian, Jan Beulich, Jun Nakajima
On 13/10/17 13:35, Sergey Dyasli wrote:
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index a22e3dfaf2..2527fdd1d1 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -426,6 +426,13 @@ int init_vcpu_msr_policy(struct vcpu *v)
> return 0;
> }
>
> +#define vmx_guest_rdmsr(dp, name, msr) \
> + case name: \
> + if ( !dp->msr.available ) \
> + goto gp_fault; \
> + *val = dp->msr.u.raw; \
> + break;
Eww :(
For blocks of MSRs, it would be far better to go with the same structure
as the cpuid policy. Something like:
struct {
union {
uint64_t raw[NR_VMX_MSRS];
struct {
struct {
...
} basic;
struct {
...
} pinbased_ctls;
};
};
} vmx;
This way, the guest_rdmsr() will be far more efficient.
case MSR_IA32_VMX_BASIC ... xxx:
if ( !cpuid->basic.vmx )
goto gp_fault;
*val = dp->vmx.raw[msr - MSR_IA32_VMX_BASIC];
break;
It would probably be worth splitting into a couple of different blocks
based on the different availability checks.
~Andrew
* Re: [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy
2017-10-13 15:16 ` Andrew Cooper
@ 2017-10-16 7:42 ` Sergey Dyasli
2017-10-16 14:01 ` Andrew Cooper
0 siblings, 1 reply; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-16 7:42 UTC (permalink / raw)
To: Andrew Cooper, xen-devel
Cc: Sergey Dyasli, Kevin Tian, jun.nakajima, jbeulich
On Fri, 2017-10-13 at 16:16 +0100, Andrew Cooper wrote:
> On 13/10/17 13:35, Sergey Dyasli wrote:
> > @@ -210,6 +375,255 @@ struct msr_domain_policy
> > bool available; /* This MSR is non-architectural */
> > bool cpuid_faulting;
> > } plaform_info;
> > +
> > + /* 0x00000480 MSR_IA32_VMX_BASIC */
> > + struct {
> > + bool available;
>
> We don't need available bits for any of these MSRs. Their availability
> is cpuid->basic.vmx, and we don't want (let alone need) to duplicate
> information like this.
Andrew,
What do you think about the following way of checking the availability?
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 2527fdd1d1..828f1bb503 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -33,6 +33,43 @@ struct msr_domain_policy __read_mostly raw_msr_domain_policy,
struct msr_vcpu_policy __read_mostly hvm_max_msr_vcpu_policy,
__read_mostly pv_max_msr_vcpu_policy;
+bool msr_vmx_available(const struct domain *d, uint32_t msr)
+{
+ const struct msr_domain_policy *dp = d->arch.msr;
+ bool secondary_available;
+
+ if ( !nestedhvm_enabled(d) || !d->arch.cpuid->basic.vmx )
+ return false;
+
+ secondary_available =
+ dp->vmx_procbased_ctls.u.allowed_1.activate_secondary_controls;
+
+ switch (msr)
+ {
+ case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMCS_ENUM:
+ return true;
+
+ case MSR_IA32_VMX_PROCBASED_CTLS2:
+ return secondary_available;
+
+ case MSR_IA32_VMX_EPT_VPID_CAP:
+ return ( secondary_available &&
+ (dp->vmx_procbased_ctls2.u.allowed_1.enable_ept ||
+ dp->vmx_procbased_ctls2.u.allowed_1.enable_vpid) );
+
+ case MSR_IA32_VMX_TRUE_PINBASED_CTLS ... MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+ return dp->vmx_basic.u.default1_zero;
+
+ case MSR_IA32_VMX_VMFUNC:
+ return ( secondary_available &&
+ dp->vmx_procbased_ctls2.u.allowed_1.enable_vm_functions );
+
+ default: break;
+ }
+
+ return false;
+}
+
static void __init calculate_raw_vmx_policy(struct msr_domain_policy *dp)
{
if ( !cpu_has_vmx )
--
Thanks,
Sergey
* Re: [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes
2017-10-13 15:25 ` Andrew Cooper
@ 2017-10-16 7:46 ` Sergey Dyasli
0 siblings, 0 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-16 7:46 UTC (permalink / raw)
To: Andrew Cooper, xen-devel
Cc: Sergey Dyasli, Kevin Tian, jun.nakajima, jbeulich
On Fri, 2017-10-13 at 16:25 +0100, Andrew Cooper wrote:
> On 13/10/17 13:35, Sergey Dyasli wrote:
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 205b4cb685..7e6b15f8d7 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -928,9 +928,8 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
> > X86_CR0_CD | X86_CR0_PG)))
> >
> > /* These bits in CR4 can be set by the guest. */
> > -unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
> > +unsigned long hvm_cr4_domain_valid_bits(const struct domain *d, bool restore)
> > {
> > - const struct domain *d = v->domain;
> > const struct cpuid_policy *p;
> > bool mce, vmxe;
> >
> > @@ -963,6 +962,11 @@ unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
> > (p->feat.pku ? X86_CR4_PKE : 0));
> > }
> >
> > +unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore)
>
> I'd split this change out into a separate patch and change the existing
> guest valid bits to taking a domain *.
>
> It needed to take vcpu in the past because of the old cpuid
> infrastructure, but it doesn't need to any more because of the
> domain-wide struct cpuid policy.
That was one of the possibilities, so I really needed a maintainer's opinion
on this. Thanks for providing one!
--
Thanks,
Sergey
* Re: [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy
2017-10-16 7:42 ` Sergey Dyasli
@ 2017-10-16 14:01 ` Andrew Cooper
2017-10-18 7:30 ` Sergey Dyasli
0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-10-16 14:01 UTC (permalink / raw)
To: Sergey Dyasli, xen-devel; +Cc: Kevin Tian, jun.nakajima, jbeulich
On 16/10/17 08:42, Sergey Dyasli wrote:
> On Fri, 2017-10-13 at 16:16 +0100, Andrew Cooper wrote:
>> On 13/10/17 13:35, Sergey Dyasli wrote:
>>> @@ -210,6 +375,255 @@ struct msr_domain_policy
>>> bool available; /* This MSR is non-architectural */
>>> bool cpuid_faulting;
>>> } plaform_info;
>>> +
>>> + /* 0x00000480 MSR_IA32_VMX_BASIC */
>>> + struct {
>>> + bool available;
>> We don't need available bits for any of these MSRs. Their availability
>> is cpuid->basic.vmx, and we don't want (let alone need) to duplicate
>> information like this.
> Andrew,
>
> What do you think about the following way of checking the availability?
Preferably not. You are duplicating a lot of information already
available in guest_{rd,wr}msr(), and visually separating the
availability check from the data returned. Worst, however, is that you
risk having a mismatch between the MSR ranges which fall into this
check, and those which are calculated by it.
>
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index 2527fdd1d1..828f1bb503 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -33,6 +33,43 @@ struct msr_domain_policy __read_mostly raw_msr_domain_policy,
> struct msr_vcpu_policy __read_mostly hvm_max_msr_vcpu_policy,
> __read_mostly pv_max_msr_vcpu_policy;
>
> +bool msr_vmx_available(const struct domain *d, uint32_t msr)
> +{
> + const struct msr_domain_policy *dp = d->arch.msr;
> + bool secondary_available;
> +
> + if ( !nestedhvm_enabled(d) || !d->arch.cpuid->basic.vmx )
> + return false;
For now, we do need to double up the d->arch.cpuid->basic.vmx with
nestedhvm_enabled(d), but rest assured that nestedhvm_enabled(d) will be
disappearing in due course. (It exists only because we don't have fine
grain toolstack control of CPUID/MSR values yet).
> +
> + secondary_available =
> + dp->vmx_procbased_ctls.u.allowed_1.activate_secondary_controls;
> +
> + switch (msr)
> + {
> + case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMCS_ENUM:
> + return true;
> +
> + case MSR_IA32_VMX_PROCBASED_CTLS2:
> + return secondary_available;
> +
> + case MSR_IA32_VMX_EPT_VPID_CAP:
> + return ( secondary_available &&
> + (dp->vmx_procbased_ctls2.u.allowed_1.enable_ept ||
> + dp->vmx_procbased_ctls2.u.allowed_1.enable_vpid) );
This check can be made more efficient in two ways. First, use a bitwise
rather than logical or, which allows both _ept and _vpid to be tested
with a single instruction, rather than a conditional branch.
Secondly, the CPUID infrastructure has logic to flatten dependency
trees, so we don't need to encode logic paths like this. In practice,
however, you only read into the policy for details which match the
dependency tree, so you can drop the secondary_available check here, as
you know that if secondary_available is clear,
dp->vmx_procbased_ctls2.raw will be 0.
~Andrew
> +
> + case MSR_IA32_VMX_TRUE_PINBASED_CTLS ... MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> + return dp->vmx_basic.u.default1_zero;
> +
> + case MSR_IA32_VMX_VMFUNC:
> + return ( secondary_available &&
> + dp->vmx_procbased_ctls2.u.allowed_1.enable_vm_functions );
> +
> + default: break;
> + }
> +
> + return false;
> +}
> +
> static void __init calculate_raw_vmx_policy(struct msr_domain_policy *dp)
> {
> if ( !cpu_has_vmx )
>
* Re: [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr()
2017-10-13 15:38 ` Andrew Cooper
@ 2017-10-16 14:50 ` Sergey Dyasli
0 siblings, 0 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-16 14:50 UTC (permalink / raw)
To: Andrew Cooper, xen-devel
Cc: Sergey Dyasli, Kevin Tian, jun.nakajima, jbeulich
On Fri, 2017-10-13 at 16:38 +0100, Andrew Cooper wrote:
> On 13/10/17 13:35, Sergey Dyasli wrote:
> > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > index a22e3dfaf2..2527fdd1d1 100644
> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -426,6 +426,13 @@ int init_vcpu_msr_policy(struct vcpu *v)
> > return 0;
> > }
> >
> > +#define vmx_guest_rdmsr(dp, name, msr) \
> > + case name: \
> > + if ( !dp->msr.available ) \
> > + goto gp_fault; \
> > + *val = dp->msr.u.raw; \
> > + break;
>
> Eww :(
>
> For blocks of MSRs, it would be far better to go with the same structure
> as the cpuid policy. Something like:
>
> struct {
> union {
> uint64_t raw[NR_VMX_MSRS];
> struct {
> struct {
> ...
> } basic;
> struct {
> ...
> } pinbased_ctls;
> };
> };
> } vmx;
>
> This way, the guest_rdmsr() will be far more efficient.
>
> case MSR_IA32_VMX_BASIC ... xxx:
> if ( !cpuid->basic.vmx )
> goto gp_fault;
> *val = dp->vmx.raw[msr - MSR_IA32_VMX_BASIC];
> break;
>
> It would probably be worth splitting into a couple of different blocks
> based on the different availability checks.
I can understand an argument about removing available flags and getting
a smaller msr policy struct, but I fail to see how a large number of case
statements will make guest_rdmsr() inefficient. I expect a switch
statement to have O(log(N)) complexity, which means it doesn't really
matter how many case statements there are.
--
Thanks,
Sergey
* Re: [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy
2017-10-16 14:01 ` Andrew Cooper
@ 2017-10-18 7:30 ` Sergey Dyasli
0 siblings, 0 replies; 15+ messages in thread
From: Sergey Dyasli @ 2017-10-18 7:30 UTC (permalink / raw)
To: Andrew Cooper, xen-devel
Cc: Sergey Dyasli, Kevin Tian, jun.nakajima, jbeulich
On Mon, 2017-10-16 at 15:01 +0100, Andrew Cooper wrote:
> On 16/10/17 08:42, Sergey Dyasli wrote:
> > +
> > + secondary_available =
> > + dp->vmx_procbased_ctls.u.allowed_1.activate_secondary_controls;
> > +
> > + switch (msr)
> > + {
> > + case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMCS_ENUM:
> > + return true;
> > +
> > + case MSR_IA32_VMX_PROCBASED_CTLS2:
> > + return secondary_available;
> > +
> > + case MSR_IA32_VMX_EPT_VPID_CAP:
> > + return ( secondary_available &&
> > + (dp->vmx_procbased_ctls2.u.allowed_1.enable_ept ||
> > + dp->vmx_procbased_ctls2.u.allowed_1.enable_vpid) );
>
> This check can be made more efficient in two ways. First, use a bitwise
> rather than logical or, which allows both _ept and _vpid to be tested
> with a single instruction, rather than a conditional branch.
But it's the compiler's job to optimize conditions like that.
I'm getting the following asm:
if ( dp->vmx_procbased_ctls2.allowed_1.enable_ept ||
ffff82d08027bc3d: 48 c1 e8 20 shr $0x20,%rax
ffff82d08027bc41: a8 22 test $0x22,%al
ffff82d08027bc43: 74 0d je ffff82d08027bc52 <recalculate_domain_vmx_msr_policy+0x196>
And "test $0x22" is exactly the test for "enable_ept || enable_vpid"
with a single instruction.
--
Thanks,
Sergey
Thread overview: 15+ messages
2017-10-13 12:35 [PATCH v3 0/6] VMX MSRs policy for Nested Virt: part 1 Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 1/6] x86/msr: add Raw and Host domain policies Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 2/6] x86/msr: add VMX MSRs into struct msr_domain_policy Sergey Dyasli
2017-10-13 15:16 ` Andrew Cooper
2017-10-16 7:42 ` Sergey Dyasli
2017-10-16 14:01 ` Andrew Cooper
2017-10-18 7:30 ` Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 3/6] x86/msr: read VMX MSRs values into Raw policy Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 4/6] x86/msr: add VMX MSRs into HVM_max domain policy Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 5/6] x86/msr: update domain policy on CPUID policy changes Sergey Dyasli
2017-10-13 15:25 ` Andrew Cooper
2017-10-16 7:46 ` Sergey Dyasli
2017-10-13 12:35 ` [PATCH v3 6/6] x86/msr: handle VMX MSRs with guest_rd/wrmsr() Sergey Dyasli
2017-10-13 15:38 ` Andrew Cooper
2017-10-16 14:50 ` Sergey Dyasli