* [PATCH v2 0/4] KVM: VMX: enable LBR virtualization
@ 2015-10-23 9:15 Jian Zhou
2015-10-23 9:15 ` [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs Jian Zhou
` (5 more replies)
0 siblings, 6 replies; 16+ messages in thread
From: Jian Zhou @ 2015-10-23 9:15 UTC (permalink / raw)
To: kvm, pbonzini, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
Changelog in v2:
(1) move the implementation into vmx.c
(2) migration is supported
(3) add arrays in kvm_vcpu_arch struct to save/restore
LBR MSRs at vm exit/entry time.
(4) add a parameter of kvm_intel module to permanently
disable LBRV
(5) table of supported CPUs is reorganized; LBRV
can be enabled or not according to the guest CPUID
Jian Zhou (4):
KVM: X86: Add arrays to save/restore LBR MSRs
KVM: X86: LBR MSRs of supported CPU types
KVM: X86: Migration is supported
KVM: VMX: details of LBR virtualization implementation
arch/x86/include/asm/kvm_host.h | 26 ++++-
arch/x86/include/asm/msr-index.h | 26 ++++-
arch/x86/kvm/vmx.c | 245 +++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/x86.c | 88 ++++++++++++--
4 files changed, 366 insertions(+), 19 deletions(-)
--
1.7.12.4
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
@ 2015-10-23 9:15 ` Jian Zhou
2015-10-23 12:14 ` kbuild test robot
2015-10-23 9:15 ` [PATCH v2 2/4] KVM: X86: LBR MSRs of supported CPU types Jian Zhou
` (4 subsequent siblings)
5 siblings, 1 reply; 16+ messages in thread
From: Jian Zhou @ 2015-10-23 9:15 UTC (permalink / raw)
To: kvm, pbonzini, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
Add arrays in kvm_vcpu_arch struct to save/restore
LBR MSRs at vm exit/entry time.
Add new hooks to set/get DEBUGCTLMSR and LBR MSRs.
Signed-off-by: Jian Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Stephen He <herongguang.he@huawei.com>
---
arch/x86/include/asm/kvm_host.h | 26 ++++++++++++++++++++------
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3a36ee7..dc2c120 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -376,6 +376,12 @@ struct kvm_vcpu_hv {
u64 hv_vapic;
};
+struct msr_data {
+ bool host_initiated;
+ u32 index;
+ u64 data;
+};
+
struct kvm_vcpu_arch {
/*
* rip and regs accesses must go through
@@ -516,6 +522,15 @@ struct kvm_vcpu_arch {
unsigned long eff_db[KVM_NR_DB_REGS];
unsigned long guest_debug_dr7;
+ int lbr_status;
+ int lbr_used;
+
+ struct lbr_msr {
+ unsigned nr;
+ struct msr_data guest[MAX_NUM_LBR_MSRS];
+ struct msr_data host[MAX_NUM_LBR_MSRS];
+ }lbr_msr;
+
u64 mcg_cap;
u64 mcg_status;
u64 mcg_ctl;
@@ -728,12 +743,6 @@ struct kvm_vcpu_stat {
struct x86_instruction_info;
-struct msr_data {
- bool host_initiated;
- u32 index;
- u64 data;
-};
-
struct kvm_lapic_irq {
u32 vector;
u16 delivery_mode;
@@ -887,6 +896,11 @@ struct kvm_x86_ops {
gfn_t offset, unsigned long mask);
/* pmu operations of sub-arch */
const struct kvm_pmu_ops *pmu_ops;
+
+ int (*set_debugctlmsr)(struct kvm_vcpu *vcpu, u64 value);
+ u64 (*get_debugctlmsr)(void);
+ void (*set_lbr_msr)(struct kvm_vcpu *vcpu, u32 msr, u64 data);
+ u64 (*get_lbr_msr)(struct kvm_vcpu *vcpu, u32 msr);
};
struct kvm_arch_async_pf {
--
1.7.12.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
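The per-vCPU layout this patch introduces can be sketched in plain userspace C (struct names follow the patch; MAX_NUM_LBR_MSRS is only defined in patch 2, so its value here is an assumption):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_NUM_LBR_MSRS 128 /* assumed; the real macro comes from patch 2 */

struct msr_data {
	bool host_initiated;
	uint32_t index;
	uint64_t data;
};

/* Paired guest/host arrays, mirroring the kvm_vcpu_arch addition. */
struct lbr_msr {
	unsigned int nr;
	struct msr_data guest[MAX_NUM_LBR_MSRS];
	struct msr_data host[MAX_NUM_LBR_MSRS];
};

/* Register one LBR MSR slot in both the guest and host arrays, the way
 * vmx_enable_lbrv() in patch 4 populates them. */
static void lbr_add_msr(struct lbr_msr *m, uint32_t index)
{
	m->guest[m->nr].index = index;
	m->guest[m->nr].data = 0;
	m->host[m->nr].index = index;
	m->host[m->nr].data = 0;
	m->nr++;
}
```

Keeping the guest and host entries index-aligned is what lets the vm-entry/vm-exit switch in patch 4 walk both arrays with a single counter.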
* [PATCH v2 2/4] KVM: X86: LBR MSRs of supported CPU types
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
2015-10-23 9:15 ` [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs Jian Zhou
@ 2015-10-23 9:15 ` Jian Zhou
2015-10-23 9:15 ` [PATCH v2 3/4] KVM: X86: Migration is supported Jian Zhou
` (3 subsequent siblings)
5 siblings, 0 replies; 16+ messages in thread
From: Jian Zhou @ 2015-10-23 9:15 UTC (permalink / raw)
To: kvm, pbonzini, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
Add macros for the LBR MSRs of supported CPU types.
Signed-off-by: Jian Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Stephen He <herongguang.he@huawei.com>
---
arch/x86/include/asm/msr-index.h | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index b98b471..2afcacd 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -68,10 +68,32 @@
#define MSR_LBR_SELECT 0x000001c8
#define MSR_LBR_TOS 0x000001c9
+#define MSR_LBR_CORE_FROM 0x00000040
+#define MSR_LBR_CORE_TO 0x00000060
+#define MAX_NUM_LBR_MSRS 128
+/* Pentium4/Xeon(based on NetBurst) LBR */
+#define MSR_PENTIUM4_LER_FROM_LIP 0x000001d7
+#define MSR_PENTIUM4_LER_TO_LIP 0x000001d8
+#define MSR_PENTIUM4_LBR_TOS 0x000001da
+#define MSR_LBR_PENTIUM4_FROM 0x00000680
+#define MSR_LBR_PENTIUM4_TO 0x000006c0
+#define SIZE_PENTIUM4_LBR_STACK 16
+/* Core2 LBR */
+#define MSR_LBR_CORE2_FROM MSR_LBR_CORE_FROM
+#define MSR_LBR_CORE2_TO MSR_LBR_CORE_TO
+#define SIZE_CORE2_LBR_STACK 4
+/* Atom LBR */
+#define MSR_LBR_ATOM_FROM MSR_LBR_CORE_FROM
+#define MSR_LBR_ATOM_TO MSR_LBR_CORE_TO
+#define SIZE_ATOM_LBR_STACK 8
+/* Nehalem LBR */
#define MSR_LBR_NHM_FROM 0x00000680
#define MSR_LBR_NHM_TO 0x000006c0
-#define MSR_LBR_CORE_FROM 0x00000040
-#define MSR_LBR_CORE_TO 0x00000060
+#define SIZE_NHM_LBR_STACK 16
+/* Skylake LBR */
+#define MSR_LBR_SKYLAKE_FROM MSR_LBR_NHM_FROM
+#define MSR_LBR_SKYLAKE_TO MSR_LBR_NHM_TO
+#define SIZE_SKYLAKE_LBR_STACK 32
#define MSR_LBR_INFO_0 0x00000dc0 /* ... 0xddf for _31 */
#define LBR_INFO_MISPRED BIT_ULL(63)
--
1.7.12.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
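Patch 4 chooses among these per-model MSR tables by decoding the family/model fields of CPUID leaf 1. A standalone sketch of that decode (the bit layout follows the Intel SDM; this is not the kernel code itself):

```c
#include <assert.h>
#include <stdint.h>

/* Decode x86 family/model from CPUID.01H:EAX, including the
 * extended-family and extended-model fields. */
static void decode_fms(uint32_t eax, unsigned int *family, unsigned int *model)
{
	*family = (eax >> 8) & 0xf;
	*model = (eax >> 4) & 0xf;
	if (*family == 0xf)
		*family += (eax >> 20) & 0xff;
	if (*family >= 6)
		*model += ((eax >> 16) & 0xf) << 4;
}
```

For example, a family-6 signature with extended model 5 and model 0xe decodes to model 94, which selects the Skylake table above.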
* [PATCH v2 3/4] KVM: X86: Migration is supported
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
2015-10-23 9:15 ` [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs Jian Zhou
2015-10-23 9:15 ` [PATCH v2 2/4] KVM: X86: LBR MSRs of supported CPU types Jian Zhou
@ 2015-10-23 9:15 ` Jian Zhou
2015-11-11 15:15 ` Paolo Bonzini
2015-10-23 9:15 ` [PATCH v2 4/4] KVM: VMX: details of LBR virtualization implementation Jian Zhou
` (2 subsequent siblings)
5 siblings, 1 reply; 16+ messages in thread
From: Jian Zhou @ 2015-10-23 9:15 UTC (permalink / raw)
To: kvm, pbonzini, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
Supported bits of MSR_IA32_DEBUGCTLMSR are DEBUGCTLMSR_LBR(bit 0),
DEBUGCTLMSR_BTF(bit 1) and DEBUGCTLMSR_FREEZE_LBRS_ON_PMI(bit 11).
Qemu can get/set contents of LBR MSRs and LBR status in order to
support migration.
Signed-off-by: Jian Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Stephen He <herongguang.he@huawei.com>
---
arch/x86/kvm/x86.c | 88 +++++++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 77 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a9a198..a3c72db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -136,6 +136,8 @@ struct kvm_shared_msrs {
static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
static struct kvm_shared_msrs __percpu *shared_msrs;
+#define MSR_LBR_STATUS 0xd6
+
struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "pf_fixed", VCPU_STAT(pf_fixed) },
{ "pf_guest", VCPU_STAT(pf_guest) },
@@ -1917,6 +1919,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
bool pr = false;
u32 msr = msr_info->index;
u64 data = msr_info->data;
+ u64 supported = 0;
switch (msr) {
case MSR_AMD64_NB_CFG:
@@ -1948,16 +1951,25 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
}
break;
case MSR_IA32_DEBUGCTLMSR:
- if (!data) {
- /* We support the non-activated case already */
- break;
- } else if (data & ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF)) {
- /* Values other than LBR and BTF are vendor-specific,
- thus reserved and should throw a #GP */
+ supported = DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF |
+ DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
+
+ if (data & ~supported) {
+ /*
+ * Values other than LBR/BTF/FREEZE_LBRS_ON_PMI
+ * are not supported, thus reserved and should throw a #GP
+ */
+ vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
+ __func__, data);
return 1;
}
- vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
- __func__, data);
+ if (kvm_x86_ops->set_debugctlmsr) {
+ if (kvm_x86_ops->set_debugctlmsr(vcpu, data))
+ return 1;
+ }
+ else
+ return 1;
+
break;
case 0x200 ... 0x2ff:
return kvm_mtrr_set_msr(vcpu, msr, data);
@@ -2078,6 +2090,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
vcpu_unimpl(vcpu, "disabled perfctr wrmsr: "
"0x%x data 0x%llx\n", msr, data);
break;
+ case MSR_LBR_STATUS:
+ if (kvm_x86_ops->set_debugctlmsr) {
+ vcpu->arch.lbr_status = (data == 0) ? 0 : 1;
+ if (data)
+ kvm_x86_ops->set_debugctlmsr(vcpu,
+ DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+ } else
+ vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
+ "0x%x data 0x%llx\n", msr, data);
+ break;
+ case MSR_LBR_SELECT:
+ case MSR_LBR_TOS:
+ case MSR_PENTIUM4_LER_FROM_LIP:
+ case MSR_PENTIUM4_LER_TO_LIP:
+ case MSR_PENTIUM4_LBR_TOS:
+ case MSR_IA32_LASTINTFROMIP:
+ case MSR_IA32_LASTINTTOIP:
+ case MSR_LBR_CORE2_FROM ... MSR_LBR_CORE2_FROM + 0x7:
+ case MSR_LBR_CORE2_TO ... MSR_LBR_CORE2_TO + 0x7:
+ case MSR_LBR_NHM_FROM ... MSR_LBR_NHM_FROM + 0x1f:
+ case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 0x1f:
+ if (kvm_x86_ops->set_lbr_msr)
+ kvm_x86_ops->set_lbr_msr(vcpu, msr, data);
+ else
+ vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
+ "0x%x data 0x%llx\n", msr, data);
+ break;
case MSR_K7_CLK_CTL:
/*
* Ignore all writes to this no longer documented MSR.
@@ -2178,13 +2217,16 @@ static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
switch (msr_info->index) {
+ case MSR_IA32_DEBUGCTLMSR:
+ if (kvm_x86_ops->get_debugctlmsr)
+ msr_info->data = kvm_x86_ops->get_debugctlmsr();
+ else
+ msr_info->data = 0;
+ break;
case MSR_IA32_PLATFORM_ID:
case MSR_IA32_EBL_CR_POWERON:
- case MSR_IA32_DEBUGCTLMSR:
case MSR_IA32_LASTBRANCHFROMIP:
case MSR_IA32_LASTBRANCHTOIP:
- case MSR_IA32_LASTINTFROMIP:
- case MSR_IA32_LASTINTTOIP:
case MSR_K8_SYSCFG:
case MSR_K8_TSEG_ADDR:
case MSR_K8_TSEG_MASK:
@@ -2204,6 +2246,26 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
return kvm_pmu_get_msr(vcpu, msr_info->index, &msr_info->data);
msr_info->data = 0;
break;
+ case MSR_LBR_STATUS:
+ msr_info->data = vcpu->arch.lbr_status;
+ break;
+ case MSR_LBR_SELECT:
+ case MSR_LBR_TOS:
+ case MSR_PENTIUM4_LER_FROM_LIP:
+ case MSR_PENTIUM4_LER_TO_LIP:
+ case MSR_PENTIUM4_LBR_TOS:
+ case MSR_IA32_LASTINTFROMIP:
+ case MSR_IA32_LASTINTTOIP:
+ case MSR_LBR_CORE2_FROM ... MSR_LBR_CORE2_FROM + 0x7:
+ case MSR_LBR_CORE2_TO ... MSR_LBR_CORE2_TO + 0x7:
+ case MSR_LBR_SKYLAKE_FROM ... MSR_LBR_SKYLAKE_FROM + 0x1f:
+ case MSR_LBR_SKYLAKE_TO ... MSR_LBR_SKYLAKE_TO + 0x1f:
+ if (kvm_x86_ops->get_lbr_msr)
+ msr_info->data = kvm_x86_ops->get_lbr_msr(vcpu,
+ msr_info->index);
+ else
+ msr_info->data = 0;
+ break;
case MSR_IA32_UCODE_REV:
msr_info->data = 0x100000000ULL;
break;
@@ -7376,6 +7438,10 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
kvm_async_pf_hash_reset(vcpu);
kvm_pmu_init(vcpu);
+ vcpu->arch.lbr_status = 0;
+ vcpu->arch.lbr_used = 0;
+ vcpu->arch.lbr_msr.nr = 0;
+
return 0;
fail_free_mce_banks:
--
1.7.12.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
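The new validation rule for MSR_IA32_DEBUGCTLMSR can be exercised in isolation. A minimal sketch of the supported-bits check (bit positions are the ones named in the commit message):

```c
#include <assert.h>
#include <stdint.h>

#define DEBUGCTLMSR_LBR                (1ULL << 0)
#define DEBUGCTLMSR_BTF                (1ULL << 1)
#define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1ULL << 11)

/* Mirror of the wrmsr(MSR_IA32_DEBUGCTLMSR) check in this patch:
 * return 1 (inject a #GP) when any unsupported bit is set,
 * 0 when the write is acceptable. */
static int check_debugctl_write(uint64_t data)
{
	uint64_t supported = DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF |
			     DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;

	return (data & ~supported) ? 1 : 0;
}
```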
* [PATCH v2 4/4] KVM: VMX: details of LBR virtualization implementation
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
` (2 preceding siblings ...)
2015-10-23 9:15 ` [PATCH v2 3/4] KVM: X86: Migration is supported Jian Zhou
@ 2015-10-23 9:15 ` Jian Zhou
2015-11-09 1:33 ` [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
2015-11-11 15:23 ` Paolo Bonzini
5 siblings, 0 replies; 16+ messages in thread
From: Jian Zhou @ 2015-10-23 9:15 UTC (permalink / raw)
To: kvm, pbonzini, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
Use the MSR intercept bitmap and the save/restore arrays
in the kvm_vcpu_arch struct to support LBR virtualization.
Add a parameter to the kvm_intel module to permanently
disable LBRV.
Reorganize the table of supported CPUs; LBRV is enabled
or not according to the guest CPUID.
Signed-off-by: Jian Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Stephen He <herongguang.he@huawei.com>
---
arch/x86/kvm/vmx.c | 245 +++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 245 insertions(+)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6a8bc64..3ab890d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -90,6 +90,9 @@ module_param(fasteoi, bool, S_IRUGO);
static bool __read_mostly enable_apicv = 1;
module_param(enable_apicv, bool, S_IRUGO);
+static bool __read_mostly lbrv = 1;
+module_param(lbrv, bool, S_IRUGO);
+
static bool __read_mostly enable_shadow_vmcs = 1;
module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
/*
@@ -4323,6 +4326,21 @@ static void vmx_disable_intercept_msr_write_x2apic(u32 msr)
msr, MSR_TYPE_W);
}
+static void vmx_disable_intercept_guest_msr(struct kvm_vcpu *vcpu, u32 msr)
+{
+ if (irqchip_in_kernel(vcpu->kvm) &&
+ apic_x2apic_mode(vcpu->arch.apic)) {
+ vmx_disable_intercept_msr_read_x2apic(msr);
+ vmx_disable_intercept_msr_write_x2apic(msr);
+ }
+ else {
+ if (is_long_mode(vcpu))
+ vmx_disable_intercept_for_msr(msr, true);
+ else
+ vmx_disable_intercept_for_msr(msr, false);
+ }
+}
+
static int vmx_vm_has_apicv(struct kvm *kvm)
{
return enable_apicv && irqchip_in_kernel(kvm);
@@ -6037,6 +6055,13 @@ static __init int hardware_setup(void)
kvm_x86_ops->sync_pir_to_irr = vmx_sync_pir_to_irr_dummy;
}
+ if (!lbrv) {
+ kvm_x86_ops->set_debugctlmsr = NULL;
+ kvm_x86_ops->get_debugctlmsr = NULL;
+ kvm_x86_ops->set_lbr_msr = NULL;
+ kvm_x86_ops->get_lbr_msr = NULL;
+ }
+
vmx_disable_intercept_for_msr(MSR_FS_BASE, false);
vmx_disable_intercept_for_msr(MSR_GS_BASE, false);
vmx_disable_intercept_for_msr(MSR_KERNEL_GS_BASE, true);
@@ -8258,6 +8283,215 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
msrs[i].host);
}
+struct lbr_info {
+ u32 base;
+ u8 count;
+} pentium4_lbr[] = {
+ { MSR_LBR_SELECT, 1 },
+ { MSR_PENTIUM4_LER_FROM_LIP, 1 },
+ { MSR_PENTIUM4_LER_TO_LIP, 1 },
+ { MSR_PENTIUM4_LBR_TOS, 1 },
+ { MSR_LBR_PENTIUM4_FROM, SIZE_PENTIUM4_LBR_STACK },
+ { MSR_LBR_PENTIUM4_TO, SIZE_PENTIUM4_LBR_STACK },
+ { 0, 0 }
+}, core2_lbr[] = {
+ { MSR_LBR_SELECT, 1 },
+ { MSR_IA32_LASTINTFROMIP, 1 },
+ { MSR_IA32_LASTINTTOIP, 1 },
+ { MSR_LBR_TOS, 1 },
+ { MSR_LBR_CORE2_FROM, SIZE_CORE2_LBR_STACK },
+ { MSR_LBR_CORE2_TO, SIZE_CORE2_LBR_STACK },
+ { 0, 0 }
+}, atom_lbr[] = {
+ { MSR_LBR_SELECT, 1 },
+ { MSR_IA32_LASTINTFROMIP, 1 },
+ { MSR_IA32_LASTINTTOIP, 1 },
+ { MSR_LBR_TOS, 1 },
+ { MSR_LBR_ATOM_FROM, SIZE_ATOM_LBR_STACK },
+ { MSR_LBR_ATOM_TO, SIZE_ATOM_LBR_STACK },
+ { 0, 0 }
+}, nehalem_lbr[] = {
+ { MSR_LBR_SELECT, 1 },
+ { MSR_IA32_LASTINTFROMIP, 1 },
+ { MSR_IA32_LASTINTTOIP, 1 },
+ { MSR_LBR_TOS, 1 },
+ { MSR_LBR_NHM_FROM, SIZE_NHM_LBR_STACK },
+ { MSR_LBR_NHM_TO, SIZE_NHM_LBR_STACK },
+ { 0, 0 }
+}, skylake_lbr[] = {
+ { MSR_LBR_SELECT, 1 },
+ { MSR_IA32_LASTINTFROMIP, 1 },
+ { MSR_IA32_LASTINTTOIP, 1 },
+ { MSR_LBR_TOS, 1 },
+ { MSR_LBR_SKYLAKE_FROM, SIZE_SKYLAKE_LBR_STACK },
+ { MSR_LBR_SKYLAKE_TO, SIZE_SKYLAKE_LBR_STACK },
+ { 0, 0}
+};
+
+static const struct lbr_info *last_branch_msr_get(struct kvm_vcpu *vcpu)
+{
+ struct kvm_cpuid_entry2 *best = kvm_find_cpuid_entry(vcpu, 1, 0);
+ u32 eax = best->eax;
+ u8 family = (eax >> 8) & 0xf;
+ u8 model = (eax >> 4) & 0xf;
+
+ if (family == 15)
+ family += (eax >> 20) & 0xff;
+ if (family >= 6)
+ model += ((eax >> 16) & 0xf) << 4;
+
+ if (family == 6)
+ {
+ switch (model)
+ {
+ case 15: /* 65nm Core2 "Merom" */
+ case 22: /* 65nm Core2 "Merom-L" */
+ case 23: /* 45nm Core2 "Penryn" */
+ case 29: /* 45nm Core2 "Dunnington" (MP) */
+ return core2_lbr;
+ break;
+ case 28: /* 45nm Atom "Pineview" */
+ case 38: /* 45nm Atom "Lincroft" */
+ case 39: /* 32nm Atom "Penwell" */
+ case 53: /* 32nm Atom "Cloverview" */
+ case 54: /* 32nm Atom "Cedarview" */
+ case 55: /* 22nm Atom "Silvermont" */
+ case 76: /* 14nm Atom "Airmont" */
+ case 77: /* 22nm Atom "Silvermont Avoton/Rangely" */
+ return atom_lbr;
+ break;
+ case 30: /* 45nm Nehalem */
+ case 26: /* 45nm Nehalem-EP */
+ case 46: /* 45nm Nehalem-EX */
+ case 37: /* 32nm Westmere */
+ case 44: /* 32nm Westmere-EP */
+ case 47: /* 32nm Westmere-EX */
+ case 42: /* 32nm SandyBridge */
+ case 45: /* 32nm SandyBridge-E/EN/EP */
+ case 58: /* 22nm IvyBridge */
+ case 62: /* 22nm IvyBridge-EP/EX */
+ case 60: /* 22nm Haswell Core */
+ case 63: /* 22nm Haswell Server */
+ case 69: /* 22nm Haswell ULT */
+ case 70: /* 22nm Haswell + GT3e */
+ case 61: /* 14nm Broadwell Core-M */
+ case 86: /* 14nm Broadwell Xeon D */
+ case 71: /* 14nm Broadwell + GT3e */
+ case 79: /* 14nm Broadwell Server */
+ return nehalem_lbr;
+ break;
+ case 78: /* 14nm Skylake Mobile */
+ case 94: /* 14nm Skylake Desktop */
+ return skylake_lbr;
+ break;
+ }
+ }
+ if (family == 15) {
+ switch (model)
+ {
+ /* Pentium4/Xeon(based on NetBurst)*/
+ case 3:
+ case 4:
+ case 6:
+ return pentium4_lbr;
+ break;
+ }
+ }
+
+ return NULL;
+}
+
+static int vmx_enable_lbrv(struct kvm_vcpu *vcpu)
+{
+ int i;
+ const struct lbr_info *lbr = last_branch_msr_get(vcpu);
+ struct lbr_msr *m = &(vcpu->arch.lbr_msr);
+
+ if (lbr == NULL)
+ return 1;
+
+ if (vcpu->arch.lbr_used) {
+ vcpu->arch.lbr_status = 1;
+ return 0;
+ }
+
+ for (; lbr->count; lbr++)
+ for (i = 0; (i < lbr->count); i++) {
+ m->host[m->nr].index = lbr->base + i;
+ m->host[m->nr].data = 0;
+ m->guest[m->nr].index = lbr->base + i;
+ m->guest[m->nr].data = 0;
+ m->nr++;
+
+ vmx_disable_intercept_guest_msr(vcpu, lbr->base + i);
+ }
+
+ vcpu->arch.lbr_status = 1;
+ vcpu->arch.lbr_used = 1;
+
+ return 0;
+}
+
+static int vmx_set_debugctlmsr(struct kvm_vcpu *vcpu, u64 value)
+{
+ if (value & DEBUGCTLMSR_LBR) {
+ if (vmx_enable_lbrv(vcpu))
+ return 1;
+ } else
+ vcpu->arch.lbr_status = 0;
+
+ vmcs_write64(GUEST_IA32_DEBUGCTL, value);
+
+ return 0;
+}
+
+static u64 vmx_get_debugctlmsr(void)
+{
+ return vmcs_read64(GUEST_IA32_DEBUGCTL);
+}
+
+static void vmx_set_lbr_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
+{
+ unsigned i;
+ struct lbr_msr *m = &(vcpu->arch.lbr_msr);
+
+ for (i = 0; i < m->nr; ++i)
+ if (m->guest[i].index == msr)
+ m->guest[i].data = data;
+}
+
+static u64 vmx_get_lbr_msr(struct kvm_vcpu *vcpu, u32 msr)
+{
+ unsigned i;
+ struct lbr_msr *m = &(vcpu->arch.lbr_msr);
+
+ for (i = 0; i < m->nr; ++i)
+ if (m->guest[i].index == msr)
+ break;
+
+ if (i < m->nr)
+ return m->guest[i].data;
+ else
+ return 0;
+}
+
+static void switch_lbr_msrs(struct kvm_vcpu *vcpu, int vm_exit)
+{
+ int i;
+ struct lbr_msr *m = &(vcpu->arch.lbr_msr);
+
+ for (i = 0; i < m->nr; ++i) {
+ if (vm_exit) {
+ rdmsrl(m->guest[i].index, m->guest[i].data);
+ wrmsrl(m->host[i].index, m->host[i].data);
+ }
+ else {
+ rdmsrl(m->host[i].index, m->host[i].data);
+ wrmsrl(m->guest[i].index, m->guest[i].data);
+ }
+ }
+}
+
static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -8304,6 +8538,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
atomic_switch_perf_msrs(vmx);
debugctlmsr = get_debugctlmsr();
+ if (vcpu->arch.lbr_used)
+ switch_lbr_msrs(vcpu, 0);
+
vmx->__launched = vmx->loaded_vmcs->launched;
asm(
/* Store host registers */
@@ -8410,6 +8647,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
#endif
);
+ if (vcpu->arch.lbr_used)
+ switch_lbr_msrs(vcpu, 1);
+
/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
if (debugctlmsr)
update_debugctlmsr(debugctlmsr);
@@ -10395,6 +10635,11 @@ static struct kvm_x86_ops vmx_x86_ops = {
.enable_log_dirty_pt_masked = vmx_enable_log_dirty_pt_masked,
.pmu_ops = &intel_pmu_ops,
+
+ .set_debugctlmsr = vmx_set_debugctlmsr,
+ .get_debugctlmsr = vmx_get_debugctlmsr,
+ .set_lbr_msr = vmx_set_lbr_msr,
+ .get_lbr_msr = vmx_get_lbr_msr,
};
static int __init vmx_init(void)
--
1.7.12.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
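The save/restore dance in switch_lbr_msrs() can be checked with a fake MSR store standing in for rdmsrl/wrmsrl. A userspace sketch with two MSR slots (the indices are placeholders, not real MSR numbers):

```c
#include <assert.h>
#include <stdint.h>

#define NR_SLOTS 2

struct slot {
	uint32_t index;
	uint64_t data;
};

static uint64_t fake_msrs[NR_SLOTS]; /* stands in for the hardware MSRs */

/* On vm-exit: save the guest values out of the (fake) MSRs and restore
 * the host values. On vm-entry: the reverse. Mirrors switch_lbr_msrs(). */
static void switch_lbr_msrs(struct slot *guest, struct slot *host, int vm_exit)
{
	for (int i = 0; i < NR_SLOTS; i++) {
		if (vm_exit) {
			guest[i].data = fake_msrs[i]; /* rdmsrl(guest index) */
			fake_msrs[i] = host[i].data;  /* wrmsrl(host index) */
		} else {
			host[i].data = fake_msrs[i];  /* rdmsrl(host index) */
			fake_msrs[i] = guest[i].data; /* wrmsrl(guest index) */
		}
	}
}
```

A full entry/exit round trip should leave the host values untouched while capturing whatever the guest wrote in between.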
* Re: [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs
2015-10-23 9:15 ` [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs Jian Zhou
@ 2015-10-23 12:14 ` kbuild test robot
0 siblings, 0 replies; 16+ messages in thread
From: kbuild test robot @ 2015-10-23 12:14 UTC (permalink / raw)
To: Jian Zhou
Cc: kbuild-all, kvm, pbonzini, gleb, tglx, mingo, hpa, x86,
linux-kernel, herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
[-- Attachment #1: Type: text/plain, Size: 1383 bytes --]
Hi Jian,
[auto build test ERROR on v4.3-rc6 -- if it's inappropriate base, please suggest rules for selecting the more suitable base]
url: https://github.com/0day-ci/linux/commits/Jian-Zhou/KVM-X86-Add-arrays-to-save-restore-LBR-MSRs/20151023-172601
config: x86_64-lkp (attached as .config)
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64
Note: the linux-review/Jian-Zhou/KVM-X86-Add-arrays-to-save-restore-LBR-MSRs/20151023-172601 HEAD d402c03a709c1dff60e2800becbafaf3b2d86dcd builds fine.
It only hurts bisectibility.
All errors (new ones prefixed by >>):
In file included from include/linux/kvm_host.h:34:0,
from arch/x86/kvm/../../../virt/kvm/kvm_main.c:21:
>> arch/x86/include/asm/kvm_host.h:530:25: error: 'MAX_NUM_LBR_MSRS' undeclared here (not in a function)
struct msr_data guest[MAX_NUM_LBR_MSRS];
^
vim +/MAX_NUM_LBR_MSRS +530 arch/x86/include/asm/kvm_host.h
524
525 int lbr_status;
526 int lbr_used;
527
528 struct lbr_msr {
529 unsigned nr;
> 530 struct msr_data guest[MAX_NUM_LBR_MSRS];
531 struct msr_data host[MAX_NUM_LBR_MSRS];
532 }lbr_msr;
533
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 21911 bytes --]
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 0/4] KVM: VMX: enable LBR virtualization
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
` (3 preceding siblings ...)
2015-10-23 9:15 ` [PATCH v2 4/4] KVM: VMX: details of LBR virtualization implementation Jian Zhou
@ 2015-11-09 1:33 ` Jian Zhou
2015-11-09 9:06 ` Paolo Bonzini
2015-11-11 15:23 ` Paolo Bonzini
5 siblings, 1 reply; 16+ messages in thread
From: Jian Zhou @ 2015-11-09 1:33 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: herongguang.he, weidong.huang, peter.huangpeng
Hi Paolo,
Are there any suggestions about version 2 of VMX LBRV?
This version was updated following your advice on version 1.
BTW, the kvm-unit-test for this feature has been sent too, and I
have tested the CPUs emulated by QEMU.
Thanks,
Jian
On 2015/10/23 17:15, Jian Zhou wrote:
> Changelog in v2:
> (1) move the implementation into vmx.c
> (2) migration is supported
> (3) add arrays in kvm_vcpu_arch struct to save/restore
> LBR MSRs at vm exit/entry time.
> (4) add a parameter of kvm_intel module to permanently
> disable LBRV
> (5) table of supported CPUs is reorganized; LBRV
> can be enabled or not according to the guest CPUID
>
> Jian Zhou (4):
> KVM: X86: Add arrays to save/restore LBR MSRs
> KVM: X86: LBR MSRs of supported CPU types
> KVM: X86: Migration is supported
> KVM: VMX: details of LBR virtualization implementation
>
> arch/x86/include/asm/kvm_host.h | 26 ++++-
> arch/x86/include/asm/msr-index.h | 26 ++++-
> arch/x86/kvm/vmx.c | 245 +++++++++++++++++++++++++++++++++++++++
> arch/x86/kvm/x86.c | 88 ++++++++++++--
> 4 files changed, 366 insertions(+), 19 deletions(-)
>
> --
> 1.7.12.4
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 0/4] KVM: VMX: enable LBR virtualization
2015-11-09 1:33 ` [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
@ 2015-11-09 9:06 ` Paolo Bonzini
2015-11-09 9:26 ` Jian Zhou
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-11-09 9:06 UTC (permalink / raw)
To: Jian Zhou, kvm; +Cc: herongguang.he, weidong.huang, peter.huangpeng
On 09/11/2015 02:33, Jian Zhou wrote:
> Hi Paolo,
>
> Are there any suggestions about version 2 of VMX LBRV?
> This version was updated following your advice on version 1.
> BTW, the kvm-unit-test for this feature has been sent too, and I
> have tested the CPUs emulated by QEMU.
Hi,
since these patches will not be part of Linux 4.4, I will review them
after the end of the merge window (or at least, after I've finished
sending material for 4.4).
Paolo
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 0/4] KVM: VMX: enable LBR virtualization
2015-11-09 9:06 ` Paolo Bonzini
@ 2015-11-09 9:26 ` Jian Zhou
0 siblings, 0 replies; 16+ messages in thread
From: Jian Zhou @ 2015-11-09 9:26 UTC (permalink / raw)
To: Paolo Bonzini, kvm; +Cc: herongguang.he, weidong.huang, peter.huangpeng
On 2015/11/9 17:06, Paolo Bonzini wrote:
>
>
> On 09/11/2015 02:33, Jian Zhou wrote:
>> Hi Paolo,
>>
>> Are there any suggestions about version 2 of VMX LBRV?
>> This version was updated following your advice on version 1.
>> BTW, the kvm-unit-test for this feature has been sent too, and I
>> have tested the CPUs emulated by QEMU.
>
> Hi,
>
> since these patches will not be part of Linux 4.4, I will review them
> after the end of the merge window (or at least, after I've finished
> sending material for 4.4).
Thanks for your consideration.
Jian
>
> Paolo
>
> .
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 3/4] KVM: X86: Migration is supported
2015-10-23 9:15 ` [PATCH v2 3/4] KVM: X86: Migration is supported Jian Zhou
@ 2015-11-11 15:15 ` Paolo Bonzini
2015-11-12 7:06 ` Jian Zhou
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-11-11 15:15 UTC (permalink / raw)
To: Jian Zhou, kvm, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang, peter.huangpeng
On 23/10/2015 11:15, Jian Zhou wrote:
> data *msr_info)
> }
> break;
> case MSR_IA32_DEBUGCTLMSR:
> - if (!data) {
> - /* We support the non-activated case already */
> - break;
> - } else if (data & ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF)) {
> - /* Values other than LBR and BTF are vendor-specific,
> - thus reserved and should throw a #GP */
> + supported = DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF |
> + DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
> +
> + if (data & ~supported) {
> + /*
> + * Values other than LBR/BTF/FREEZE_LBRS_ON_PMI
> + * are not supported, thus reserved and should throw a #GP
> + */
> + vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
> + __func__, data);
> return 1;
> }
> - vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
> - __func__, data);
> + if (kvm_x86_ops->set_debugctlmsr) {
> + if (kvm_x86_ops->set_debugctlmsr(vcpu, data))
> + return 1;
> + }
> + else
> + return 1;
> +
> break;
> case 0x200 ... 0x2ff:
> return kvm_mtrr_set_msr(vcpu, msr, data);
> @@ -2078,6 +2090,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> vcpu_unimpl(vcpu, "disabled perfctr wrmsr: "
> "0x%x data 0x%llx\n", msr, data);
> break;
> + case MSR_LBR_STATUS:
> + if (kvm_x86_ops->set_debugctlmsr) {
> + vcpu->arch.lbr_status = (data == 0) ? 0 : 1;
> + if (data)
> + kvm_x86_ops->set_debugctlmsr(vcpu,
> + DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
> + } else
> + vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
> + "0x%x data 0x%llx\n", msr, data);
> + break;
> + case MSR_LBR_SELECT:
> + case MSR_LBR_TOS:
> + case MSR_PENTIUM4_LER_FROM_LIP:
> + case MSR_PENTIUM4_LER_TO_LIP:
> + case MSR_PENTIUM4_LBR_TOS:
> + case MSR_IA32_LASTINTFROMIP:
> + case MSR_IA32_LASTINTTOIP:
> + case MSR_LBR_CORE2_FROM ... MSR_LBR_CORE2_FROM + 0x7:
> + case MSR_LBR_CORE2_TO ... MSR_LBR_CORE2_TO + 0x7:
> + case MSR_LBR_NHM_FROM ... MSR_LBR_NHM_FROM + 0x1f:
> + case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 0x1f:
> + if (kvm_x86_ops->set_lbr_msr)
> + kvm_x86_ops->set_lbr_msr(vcpu, msr, data);
> + else
> + vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
> + "0x%x data 0x%llx\n", msr, data);
I think you can just do this in kvm_x86_ops->set_msr. The old
implementation for DEBUGCTL MSR can be moved to svm.c.
Paolo
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 0/4] KVM: VMX: enable LBR virtualization
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
` (4 preceding siblings ...)
2015-11-09 1:33 ` [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
@ 2015-11-11 15:23 ` Paolo Bonzini
2015-11-12 8:06 ` Jian Zhou
5 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-11-11 15:23 UTC (permalink / raw)
To: Jian Zhou, kvm, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang, peter.huangpeng
On 23/10/2015 11:15, Jian Zhou wrote:
> Changelog in v2:
> (1) move the implementation into vmx.c
> (2) migration is supported
> (3) add arrays in kvm_vcpu_arch struct to save/restore
> LBR MSRs at vm exit/entry time.
> (4) add a parameter of kvm_intel module to permanently
> disable LBRV
> (5) table of supported CPUs is reorganized; LBRV
> can be enabled or not according to the guest CPUID
>
> Jian Zhou (4):
> KVM: X86: Add arrays to save/restore LBR MSRs
> KVM: X86: LBR MSRs of supported CPU types
> KVM: X86: Migration is supported
> KVM: VMX: details of LBR virtualization implementation
>
> arch/x86/include/asm/kvm_host.h | 26 ++++-
> arch/x86/include/asm/msr-index.h | 26 ++++-
> arch/x86/kvm/vmx.c | 245 +++++++++++++++++++++++++++++++++++++++
> arch/x86/kvm/x86.c | 88 ++++++++++++--
> 4 files changed, 366 insertions(+), 19 deletions(-)
Thanks, this looks better!
The reason why it took me so long to review it is that I wanted to
understand what happens if you're running this on CPU model x but using
CPU model y for the guest. I still haven't grokked that fully, so I'll
apply your patches locally and play with them.
In the meanwhile, feel free to send v3 with: 1) the tweak I suggested to
patch 3; 2) the fix for the problem that the buildbot reported on patch 1.
Paolo
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 3/4] KVM: X86: Migration is supported
2015-11-11 15:15 ` Paolo Bonzini
@ 2015-11-12 7:06 ` Jian Zhou
2015-11-12 9:00 ` Paolo Bonzini
0 siblings, 1 reply; 16+ messages in thread
From: Jian Zhou @ 2015-11-12 7:06 UTC (permalink / raw)
To: Paolo Bonzini, kvm, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang, peter.huangpeng
On 2015/11/11 23:15, Paolo Bonzini wrote:
>
>
> On 23/10/2015 11:15, Jian Zhou wrote:
>> data *msr_info)
>> }
>> break;
>> case MSR_IA32_DEBUGCTLMSR:
>> - if (!data) {
>> - /* We support the non-activated case already */
>> - break;
>> - } else if (data & ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF)) {
>> - /* Values other than LBR and BTF are vendor-specific,
>> - thus reserved and should throw a #GP */
>> + supported = DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF |
>> + DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
>> +
>> + if (data & ~supported) {
>> + /*
>> + * Values other than LBR/BTF/FREEZE_LBRS_ON_PMI
>> + * are not supported, thus reserved and should throw a #GP
>> + */
>> + vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
>> + __func__, data);
>> return 1;
>> }
>> - vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
>> - __func__, data);
>> + if (kvm_x86_ops->set_debugctlmsr) {
>> + if (kvm_x86_ops->set_debugctlmsr(vcpu, data))
>> + return 1;
>> + }
>> + else
>> + return 1;
>> +
>> break;
>> case 0x200 ... 0x2ff:
>> return kvm_mtrr_set_msr(vcpu, msr, data);
>> @@ -2078,6 +2090,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>> vcpu_unimpl(vcpu, "disabled perfctr wrmsr: "
>> "0x%x data 0x%llx\n", msr, data);
>> break;
>> + case MSR_LBR_STATUS:
>> + if (kvm_x86_ops->set_debugctlmsr) {
>> + vcpu->arch.lbr_status = (data == 0) ? 0 : 1;
>> + if (data)
>> + kvm_x86_ops->set_debugctlmsr(vcpu,
>> + DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
>> + } else
>> + vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
>> + "0x%x data 0x%llx\n", msr, data);
>> + break;
>> + case MSR_LBR_SELECT:
>> + case MSR_LBR_TOS:
>> + case MSR_PENTIUM4_LER_FROM_LIP:
>> + case MSR_PENTIUM4_LER_TO_LIP:
>> + case MSR_PENTIUM4_LBR_TOS:
>> + case MSR_IA32_LASTINTFROMIP:
>> + case MSR_IA32_LASTINTTOIP:
>> + case MSR_LBR_CORE2_FROM ... MSR_LBR_CORE2_FROM + 0x7:
>> + case MSR_LBR_CORE2_TO ... MSR_LBR_CORE2_TO + 0x7:
>> + case MSR_LBR_NHM_FROM ... MSR_LBR_NHM_FROM + 0x1f:
>> + case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 0x1f:
>> + if (kvm_x86_ops->set_lbr_msr)
>> + kvm_x86_ops->set_lbr_msr(vcpu, msr, data);
>> + else
>> + vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
>> + "0x%x data 0x%llx\n", msr, data);
>
> I think you can just do this in kvm_x86_ops->set_msr. The old
> implementation for DEBUGCTL MSR can be moved to svm.c.
I think you mean "moved to vmx.c"?
Thanks,
Jian
> Paolo
>
* Re: [PATCH v2 0/4] KVM: VMX: enable LBR virtualization
2015-11-11 15:23 ` Paolo Bonzini
@ 2015-11-12 8:06 ` Jian Zhou
0 siblings, 0 replies; 16+ messages in thread
From: Jian Zhou @ 2015-11-12 8:06 UTC (permalink / raw)
To: Paolo Bonzini, kvm, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang, peter.huangpeng
On 2015/11/11 23:23, Paolo Bonzini wrote:
>
>
> On 23/10/2015 11:15, Jian Zhou wrote:
>> Changelog in v2:
>> (1) move the implementation into vmx.c
>> (2) migration is supported
>> (3) add arrays in kvm_vcpu_arch struct to save/restore
>> LBR MSRs at vm exit/entry time.
>> (4) add a parameter of kvm_intel module to permanently
>> disable LBRV
>> (5) table of supported CPUs is reorganized, LBRV
>> can be enabled or not according to the guest CPUID
>>
>> Jian Zhou (4):
>> KVM: X86: Add arrays to save/restore LBR MSRs
>> KVM: X86: LBR MSRs of supported CPU types
>> KVM: X86: Migration is supported
>> KVM: VMX: details of LBR virtualization implementation
>>
>> arch/x86/include/asm/kvm_host.h | 26 ++++-
>> arch/x86/include/asm/msr-index.h | 26 ++++-
>> arch/x86/kvm/vmx.c | 245 +++++++++++++++++++++++++++++++++++++++
>> arch/x86/kvm/x86.c | 88 ++++++++++++--
>> 4 files changed, 366 insertions(+), 19 deletions(-)
>
> Thanks, this looks better!
>
> The reason it took me so long to review it is that I wanted to
> understand what happens if you're running this on CPU model x but using
> CPU model y for the guest. I still haven't grokked that fully, so I'll
> apply your patches locally and play with them.
Yes, that is a good question. I plan to write a kernel module in the
guest to read/write MSR_IA32_DEBUGCTLMSR and the LBR stack MSRs,
using a host CPU model of e.g. SandyBridge while the guest CPU model
is e.g. core2duo. (The addresses of the MSRs recording last-branch
information differ between SandyBridge and core2duo.)
> In the meanwhile, feel free to send v3 with: 1) the tweak I suggested to
> patch 3; 2) the fix for the problem that the buildbot reported on patch 1.
Okay, will fix them in v3.
Thanks,
Jian
> Paolo
>
* Re: [PATCH v2 3/4] KVM: X86: Migration is supported
2015-11-12 7:06 ` Jian Zhou
@ 2015-11-12 9:00 ` Paolo Bonzini
2015-11-12 10:39 ` Jian Zhou
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-11-12 9:00 UTC (permalink / raw)
To: Jian Zhou, kvm, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang, peter.huangpeng
On 12/11/2015 08:06, Jian Zhou wrote:
>>
>> I think you can just do this in kvm_x86_ops->set_msr. The old
>> implementation for DEBUGCTL MSR can be moved to svm.c.
>
> I think you mean "moved to vmx.c"?
No, the old implementation is moved from x86.c to svm.c.
The new implementation you have in vmx.c is then called from vmx_set_msr.
Paolo
* Re: [PATCH v2 3/4] KVM: X86: Migration is supported
2015-11-12 9:00 ` Paolo Bonzini
@ 2015-11-12 10:39 ` Jian Zhou
0 siblings, 0 replies; 16+ messages in thread
From: Jian Zhou @ 2015-11-12 10:39 UTC (permalink / raw)
To: Paolo Bonzini, kvm, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang, peter.huangpeng
On 2015/11/12 17:00, Paolo Bonzini wrote:
>
>
> On 12/11/2015 08:06, Jian Zhou wrote:
>>>
>>> I think you can just do this in kvm_x86_ops->set_msr. The old
>>> implementation for DEBUGCTL MSR can be moved to svm.c.
>>
>> I think you mean "moved to vmx.c"?
>
> No, the old implementation is moved from x86.c to svm.c.
>
> The new implementation you have in vmx.c is then called from vmx_set_msr.
I got it, thanks.
Jian
> Paolo
>
* [PATCH v2 3/4] KVM: X86: Migration is supported
2015-10-23 8:46 Jian Zhou
@ 2015-10-23 8:47 ` Jian Zhou
0 siblings, 0 replies; 16+ messages in thread
From: Jian Zhou @ 2015-10-23 8:47 UTC (permalink / raw)
To: kvm, pbonzini, gleb, tglx, mingo, hpa, x86, linux-kernel
Cc: herongguang.he, zhang.zhanghailiang, weidong.huang,
peter.huangpeng, Jian Zhou
Supported bits of MSR_IA32_DEBUGCTLMSR are DEBUGCTLMSR_LBR (bit 0),
DEBUGCTLMSR_BTF (bit 1) and DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (bit 11).
QEMU can get/set the contents of the LBR MSRs and the LBR status in
order to support migration.
Signed-off-by: Jian Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Stephen He <herongguang.he@huawei.com>
---
arch/x86/kvm/x86.c | 88 +++++++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 77 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a9a198..a3c72db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -136,6 +136,8 @@ struct kvm_shared_msrs {
static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
static struct kvm_shared_msrs __percpu *shared_msrs;
+#define MSR_LBR_STATUS 0xd6
+
struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "pf_fixed", VCPU_STAT(pf_fixed) },
{ "pf_guest", VCPU_STAT(pf_guest) },
@@ -1917,6 +1919,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
bool pr = false;
u32 msr = msr_info->index;
u64 data = msr_info->data;
+ u64 supported = 0;
switch (msr) {
case MSR_AMD64_NB_CFG:
@@ -1948,16 +1951,25 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
}
break;
case MSR_IA32_DEBUGCTLMSR:
- if (!data) {
- /* We support the non-activated case already */
- break;
- } else if (data & ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF)) {
- /* Values other than LBR and BTF are vendor-specific,
- thus reserved and should throw a #GP */
+ supported = DEBUGCTLMSR_LBR | DEBUGCTLMSR_BTF |
+ DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
+
+ if (data & ~supported) {
+ /*
+ * Values other than LBR/BTF/FREEZE_LBRS_ON_PMI
+ * are not supported, thus reserved and should throw a #GP
+ */
+ vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
+ __func__, data);
return 1;
}
- vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTLMSR 0x%llx, nop\n",
- __func__, data);
+ if (kvm_x86_ops->set_debugctlmsr) {
+ if (kvm_x86_ops->set_debugctlmsr(vcpu, data))
+ return 1;
+ } else
+ return 1;
+
break;
case 0x200 ... 0x2ff:
return kvm_mtrr_set_msr(vcpu, msr, data);
@@ -2078,6 +2090,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
vcpu_unimpl(vcpu, "disabled perfctr wrmsr: "
"0x%x data 0x%llx\n", msr, data);
break;
+ case MSR_LBR_STATUS:
+ if (kvm_x86_ops->set_debugctlmsr) {
+ vcpu->arch.lbr_status = (data == 0) ? 0 : 1;
+ if (data)
+ kvm_x86_ops->set_debugctlmsr(vcpu,
+ DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+ } else
+ vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
+ "0x%x data 0x%llx\n", msr, data);
+ break;
+ case MSR_LBR_SELECT:
+ case MSR_LBR_TOS:
+ case MSR_PENTIUM4_LER_FROM_LIP:
+ case MSR_PENTIUM4_LER_TO_LIP:
+ case MSR_PENTIUM4_LBR_TOS:
+ case MSR_IA32_LASTINTFROMIP:
+ case MSR_IA32_LASTINTTOIP:
+ case MSR_LBR_CORE2_FROM ... MSR_LBR_CORE2_FROM + 0x7:
+ case MSR_LBR_CORE2_TO ... MSR_LBR_CORE2_TO + 0x7:
+ case MSR_LBR_NHM_FROM ... MSR_LBR_NHM_FROM + 0x1f:
+ case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 0x1f:
+ if (kvm_x86_ops->set_lbr_msr)
+ kvm_x86_ops->set_lbr_msr(vcpu, msr, data);
+ else
+ vcpu_unimpl(vcpu, "lbr is disabled, ignored wrmsr: "
+ "0x%x data 0x%llx\n", msr, data);
+ break;
case MSR_K7_CLK_CTL:
/*
* Ignore all writes to this no longer documented MSR.
@@ -2178,13 +2217,16 @@ static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
switch (msr_info->index) {
+ case MSR_IA32_DEBUGCTLMSR:
+ if (kvm_x86_ops->get_debugctlmsr)
+ msr_info->data = kvm_x86_ops->get_debugctlmsr();
+ else
+ msr_info->data = 0;
+ break;
case MSR_IA32_PLATFORM_ID:
case MSR_IA32_EBL_CR_POWERON:
- case MSR_IA32_DEBUGCTLMSR:
case MSR_IA32_LASTBRANCHFROMIP:
case MSR_IA32_LASTBRANCHTOIP:
- case MSR_IA32_LASTINTFROMIP:
- case MSR_IA32_LASTINTTOIP:
case MSR_K8_SYSCFG:
case MSR_K8_TSEG_ADDR:
case MSR_K8_TSEG_MASK:
@@ -2204,6 +2246,26 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
return kvm_pmu_get_msr(vcpu, msr_info->index, &msr_info->data);
msr_info->data = 0;
break;
+ case MSR_LBR_STATUS:
+ msr_info->data = vcpu->arch.lbr_status;
+ break;
+ case MSR_LBR_SELECT:
+ case MSR_LBR_TOS:
+ case MSR_PENTIUM4_LER_FROM_LIP:
+ case MSR_PENTIUM4_LER_TO_LIP:
+ case MSR_PENTIUM4_LBR_TOS:
+ case MSR_IA32_LASTINTFROMIP:
+ case MSR_IA32_LASTINTTOIP:
+ case MSR_LBR_CORE2_FROM ... MSR_LBR_CORE2_FROM + 0x7:
+ case MSR_LBR_CORE2_TO ... MSR_LBR_CORE2_TO + 0x7:
+ case MSR_LBR_SKYLAKE_FROM ... MSR_LBR_SKYLAKE_FROM + 0x1f:
+ case MSR_LBR_SKYLAKE_TO ... MSR_LBR_SKYLAKE_TO + 0x1f:
+ if (kvm_x86_ops->get_lbr_msr)
+ msr_info->data = kvm_x86_ops->get_lbr_msr(vcpu,
+ msr_info->index);
+ else
+ msr_info->data = 0;
+ break;
case MSR_IA32_UCODE_REV:
msr_info->data = 0x100000000ULL;
break;
@@ -7376,6 +7438,10 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
kvm_async_pf_hash_reset(vcpu);
kvm_pmu_init(vcpu);
+ vcpu->arch.lbr_status = 0;
+ vcpu->arch.lbr_used = 0;
+ vcpu->arch.lbr_msr.nr = 0;
+
return 0;
fail_free_mce_banks:
--
1.7.12.4
Thread overview: 16+ messages
2015-10-23 9:15 [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
2015-10-23 9:15 ` [PATCH v2 1/4] KVM: X86: Add arrays to save/restore LBR MSRs Jian Zhou
2015-10-23 12:14 ` kbuild test robot
2015-10-23 9:15 ` [PATCH v2 2/4] KVM: X86: LBR MSRs of supported CPU types Jian Zhou
2015-10-23 9:15 ` [PATCH v2 3/4] KVM: X86: Migration is supported Jian Zhou
2015-11-11 15:15 ` Paolo Bonzini
2015-11-12 7:06 ` Jian Zhou
2015-11-12 9:00 ` Paolo Bonzini
2015-11-12 10:39 ` Jian Zhou
2015-10-23 9:15 ` [PATCH v2 4/4] KVM: VMX: details of LBR virtualization implementation Jian Zhou
2015-11-09 1:33 ` [PATCH v2 0/4] KVM: VMX: enable LBR virtualization Jian Zhou
2015-11-09 9:06 ` Paolo Bonzini
2015-11-09 9:26 ` Jian Zhou
2015-11-11 15:23 ` Paolo Bonzini
2015-11-12 8:06 ` Jian Zhou
-- strict thread matches above, loose matches on Subject: below --
2015-10-23 8:46 Jian Zhou
2015-10-23 8:47 ` [PATCH v2 3/4] KVM: X86: Migration is supported Jian Zhou