* [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC
@ 2019-11-08 5:14 Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Aaron Lewis @ 2019-11-08 5:14 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis
The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
vmcs12 VM-exit MSR-store area as a way of determining the highest
TSC value that might have been observed by L2 prior to VM-exit. The
current implementation does not capture a very tight bound on this
value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
during the emulation of an L2->L1 VM-exit, special-case the
IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
VM-exit MSR-store area to derive the value to be stored in the vmcs12
VM-exit MSR-store area.
v3 -> v4:
- Squash the final commit with the previous one used to prepare the MSR-store
area. There is no need for this split after all.
v2 -> v3:
- Rename NR_MSR_ENTRIES to NR_LOADSTORE_MSRS
- Pull setup code for preparing the MSR-store area out of the final commit and
put it in its own commit (4/5).
- Export vmx_find_msr_index() in the final commit instead of in commit 3/5 as
it isn't until the final commit that we actually use it.
v1 -> v2:
- Rename function nested_vmx_get_msr_value() to
nested_vmx_get_vmexit_msr_value().
- Remove unneeded tag 'Change-Id' from commit messages.
Aaron Lewis (4):
kvm: nested: Introduce read_and_check_msr_entry()
kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS
kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
KVM: nVMX: Add support for capturing highest observable L2 TSC
arch/x86/kvm/vmx/nested.c | 136 ++++++++++++++++++++++++++++++++------
arch/x86/kvm/vmx/vmx.c | 14 ++--
arch/x86/kvm/vmx/vmx.h | 9 ++-
3 files changed, 131 insertions(+), 28 deletions(-)
--
2.24.0.432.g9d3f5f5b63-goog
* [PATCH v4 1/4] kvm: nested: Introduce read_and_check_msr_entry()
2019-11-08 5:14 [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
@ 2019-11-08 5:14 ` Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS Aaron Lewis
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2019-11-08 5:14 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis, Liran Alon
Add the function read_and_check_msr_entry(), which factors existing code
out of nested_vmx_store_msr(), so that it can be reused in upcoming
patches.
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/vmx/nested.c | 35 ++++++++++++++++++++++-------------
1 file changed, 22 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e76eb4f07f6c..7b058d7b9fcc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,26 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
return i + 1;
}
+static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
+ struct vmx_msr_entry *e)
+{
+ if (kvm_vcpu_read_guest(vcpu,
+ gpa + i * sizeof(*e),
+ e, 2 * sizeof(u32))) {
+ pr_debug_ratelimited(
+ "%s cannot read MSR entry (%u, 0x%08llx)\n",
+ __func__, i, gpa + i * sizeof(*e));
+ return false;
+ }
+ if (nested_vmx_store_msr_check(vcpu, e)) {
+ pr_debug_ratelimited(
+ "%s check failed (%u, 0x%x, 0x%x)\n",
+ __func__, i, e->index, e->reserved);
+ return false;
+ }
+ return true;
+}
+
static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
{
u64 data;
@@ -940,20 +960,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
if (unlikely(i >= max_msr_list_size))
return -EINVAL;
- if (kvm_vcpu_read_guest(vcpu,
- gpa + i * sizeof(e),
- &e, 2 * sizeof(u32))) {
- pr_debug_ratelimited(
- "%s cannot read MSR entry (%u, 0x%08llx)\n",
- __func__, i, gpa + i * sizeof(e));
+ if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
return -EINVAL;
- }
- if (nested_vmx_store_msr_check(vcpu, &e)) {
- pr_debug_ratelimited(
- "%s check failed (%u, 0x%x, 0x%x)\n",
- __func__, i, e.index, e.reserved);
- return -EINVAL;
- }
+
if (kvm_get_msr(vcpu, e.index, &data)) {
pr_debug_ratelimited(
"%s cannot read MSR (%u, 0x%x)\n",
--
2.24.0.432.g9d3f5f5b63-goog
* [PATCH v4 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS
2019-11-08 5:14 [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
@ 2019-11-08 5:14 ` Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index() Aaron Lewis
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2019-11-08 5:14 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis
Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS. This rename is needed
because a future patch adds an MSR-autostore area, after which the name
AUTOLOAD would no longer make sense.
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/vmx/vmx.c | 4 ++--
arch/x86/kvm/vmx/vmx.h | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e7970a2e8eae..d9afafc0e399 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -940,8 +940,8 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
if (!entry_only)
j = find_msr(&m->host, msr);
- if ((i < 0 && m->guest.nr == NR_AUTOLOAD_MSRS) ||
- (j < 0 && m->host.nr == NR_AUTOLOAD_MSRS)) {
+ if ((i < 0 && m->guest.nr == NR_LOADSTORE_MSRS) ||
+ (j < 0 && m->host.nr == NR_LOADSTORE_MSRS)) {
printk_once(KERN_WARNING "Not enough msr switch entries. "
"Can't add msr %x\n", msr);
return;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bee16687dc0b..1dad8e5c8f86 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -22,11 +22,11 @@ extern u32 get_umwait_control_msr(void);
#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
-#define NR_AUTOLOAD_MSRS 8
+#define NR_LOADSTORE_MSRS 8
struct vmx_msrs {
unsigned int nr;
- struct vmx_msr_entry val[NR_AUTOLOAD_MSRS];
+ struct vmx_msr_entry val[NR_LOADSTORE_MSRS];
};
struct shared_msr_entry {
--
2.24.0.432.g9d3f5f5b63-goog
* [PATCH v4 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
2019-11-08 5:14 [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS Aaron Lewis
@ 2019-11-08 5:14 ` Aaron Lewis
2019-11-08 5:14 ` [PATCH v4 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC Aaron Lewis
2019-11-15 10:23 ` [PATCH v4 0/4] Add support for capturing the " Paolo Bonzini
4 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2019-11-08 5:14 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis
Rename function find_msr() to vmx_find_msr_index() in preparation for an
upcoming patch where we export it and use it in nested.c.
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/vmx/vmx.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d9afafc0e399..3fa836a5569a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -835,7 +835,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
vm_exit_controls_clearbit(vmx, exit);
}
-static int find_msr(struct vmx_msrs *m, unsigned int msr)
+static int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
{
unsigned int i;
@@ -869,7 +869,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
}
break;
}
- i = find_msr(&m->guest, msr);
+ i = vmx_find_msr_index(&m->guest, msr);
if (i < 0)
goto skip_guest;
--m->guest.nr;
@@ -877,7 +877,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
skip_guest:
- i = find_msr(&m->host, msr);
+ i = vmx_find_msr_index(&m->host, msr);
if (i < 0)
return;
@@ -936,9 +936,9 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
}
- i = find_msr(&m->guest, msr);
+ i = vmx_find_msr_index(&m->guest, msr);
if (!entry_only)
- j = find_msr(&m->host, msr);
+ j = vmx_find_msr_index(&m->host, msr);
if ((i < 0 && m->guest.nr == NR_LOADSTORE_MSRS) ||
(j < 0 && m->host.nr == NR_LOADSTORE_MSRS)) {
--
2.24.0.432.g9d3f5f5b63-goog
* [PATCH v4 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC
2019-11-08 5:14 [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
` (2 preceding siblings ...)
2019-11-08 5:14 ` [PATCH v4 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index() Aaron Lewis
@ 2019-11-08 5:14 ` Aaron Lewis
2019-11-15 10:23 ` [PATCH v4 0/4] Add support for capturing the " Paolo Bonzini
4 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2019-11-08 5:14 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis, Liran Alon
The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
vmcs12 VM-exit MSR-store area as a way of determining the highest
TSC value that might have been observed by L2 prior to VM-exit. The
current implementation does not capture a very tight bound on this
value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
during the emulation of an L2->L1 VM-exit, special-case the
IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
VM-exit MSR-store area to derive the value to be stored in the vmcs12
VM-exit MSR-store area.
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/vmx/nested.c | 101 +++++++++++++++++++++++++++++++++++---
arch/x86/kvm/vmx/vmx.c | 2 +-
arch/x86/kvm/vmx/vmx.h | 5 ++
3 files changed, 101 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 7b058d7b9fcc..2055f15cb007 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,37 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
return i + 1;
}
+static bool nested_vmx_get_vmexit_msr_value(struct kvm_vcpu *vcpu,
+ u32 msr_index,
+ u64 *data)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+ /*
+ * If the L0 hypervisor stored a more accurate value for the TSC that
+ * does not include the time taken for emulation of the L2->L1
+ * VM-exit in L0, use the more accurate value.
+ */
+ if (msr_index == MSR_IA32_TSC) {
+ int index = vmx_find_msr_index(&vmx->msr_autostore.guest,
+ MSR_IA32_TSC);
+
+ if (index >= 0) {
+ u64 val = vmx->msr_autostore.guest.val[index].value;
+
+ *data = kvm_read_l1_tsc(vcpu, val);
+ return true;
+ }
+ }
+
+ if (kvm_get_msr(vcpu, msr_index, data)) {
+ pr_debug_ratelimited("%s cannot read MSR (0x%x)\n", __func__,
+ msr_index);
+ return false;
+ }
+ return true;
+}
+
static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
struct vmx_msr_entry *e)
{
@@ -963,12 +994,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
return -EINVAL;
- if (kvm_get_msr(vcpu, e.index, &data)) {
- pr_debug_ratelimited(
- "%s cannot read MSR (%u, 0x%x)\n",
- __func__, i, e.index);
+ if (!nested_vmx_get_vmexit_msr_value(vcpu, e.index, &data))
return -EINVAL;
- }
+
if (kvm_vcpu_write_guest(vcpu,
gpa + i * sizeof(e) +
offsetof(struct vmx_msr_entry, value),
@@ -982,6 +1010,60 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
return 0;
}
+static bool nested_msr_store_list_has_msr(struct kvm_vcpu *vcpu, u32 msr_index)
+{
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ u32 count = vmcs12->vm_exit_msr_store_count;
+ u64 gpa = vmcs12->vm_exit_msr_store_addr;
+ struct vmx_msr_entry e;
+ u32 i;
+
+ for (i = 0; i < count; i++) {
+ if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
+ return false;
+
+ if (e.index == msr_index)
+ return true;
+ }
+ return false;
+}
+
+static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
+ u32 msr_index)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ struct vmx_msrs *autostore = &vmx->msr_autostore.guest;
+ bool in_vmcs12_store_list;
+ int msr_autostore_index;
+ bool in_autostore_list;
+ int last;
+
+ msr_autostore_index = vmx_find_msr_index(autostore, msr_index);
+ in_autostore_list = msr_autostore_index >= 0;
+ in_vmcs12_store_list = nested_msr_store_list_has_msr(vcpu, msr_index);
+
+ if (in_vmcs12_store_list && !in_autostore_list) {
+ if (autostore->nr == NR_LOADSTORE_MSRS) {
+ /*
+ * Emulated VMEntry does not fail here. Instead a less
+ * accurate value will be returned by
+ * nested_vmx_get_vmexit_msr_value() using kvm_get_msr()
+ * instead of reading the value from the vmcs02 VMExit
+ * MSR-store area.
+ */
+ pr_warn_ratelimited(
+ "Not enough msr entries in msr_autostore. Can't add msr %x\n",
+ msr_index);
+ return;
+ }
+ last = autostore->nr++;
+ autostore->val[last].index = msr_index;
+ } else if (!in_vmcs12_store_list && in_autostore_list) {
+ last = --autostore->nr;
+ autostore->val[msr_autostore_index] = autostore->val[last];
+ }
+}
+
static bool nested_cr3_valid(struct kvm_vcpu *vcpu, unsigned long val)
{
unsigned long invalid_mask;
@@ -2027,7 +2109,7 @@ static void prepare_vmcs02_constant_state(struct vcpu_vmx *vmx)
* addresses are constant (for vmcs02), the counts can change based
* on L2's behavior, e.g. switching to/from long mode.
*/
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+ vmcs_write64(VM_EXIT_MSR_STORE_ADDR, __pa(vmx->msr_autostore.guest.val));
vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
@@ -2294,6 +2376,13 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
vmcs_write64(EOI_EXIT_BITMAP3, vmcs12->eoi_exit_bitmap3);
}
+ /*
+ * Make sure the msr_autostore list is up to date before we set the
+ * count in the vmcs02.
+ */
+ prepare_vmx_msr_autostore_list(&vmx->vcpu, MSR_IA32_TSC);
+
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, vmx->msr_autostore.guest.nr);
vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3fa836a5569a..a7fd70db0b1e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -835,7 +835,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
vm_exit_controls_clearbit(vmx, exit);
}
-static int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
+int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
{
unsigned int i;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 1dad8e5c8f86..673330e528e8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -230,6 +230,10 @@ struct vcpu_vmx {
struct vmx_msrs host;
} msr_autoload;
+ struct msr_autostore {
+ struct vmx_msrs guest;
+ } msr_autostore;
+
struct {
int vm86_active;
ulong save_rflags;
@@ -334,6 +338,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
+int vmx_find_msr_index(struct vmx_msrs *m, u32 msr);
#define POSTED_INTR_ON 0
#define POSTED_INTR_SN 1
--
2.24.0.432.g9d3f5f5b63-goog
* Re: [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC
2019-11-08 5:14 [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
` (3 preceding siblings ...)
2019-11-08 5:14 ` [PATCH v4 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC Aaron Lewis
@ 2019-11-15 10:23 ` Paolo Bonzini
2019-11-15 14:25 ` Aaron Lewis
4 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2019-11-15 10:23 UTC (permalink / raw)
To: Aaron Lewis, kvm; +Cc: Jim Mattson
On 08/11/19 06:14, Aaron Lewis wrote:
> The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
> vmcs12 MSR VM-exit MSR-store area as a way of determining the highest
> TSC value that might have been observed by L2 prior to VM-exit. The
> current implementation does not capture a very tight bound on this
> value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
> vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
> MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
> during the emulation of an L2->L1 VM-exit, special-case the
> IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
> VM-exit MSR-store area to derive the value to be stored in the vmcs12
> VM-exit MSR-store area.
>
> v3 -> v4:
> - Squash the final commit with the previous one used to prepare the MSR-store
> area. There is no need for this split after all.
>
> v2 -> v3:
> - Rename NR_MSR_ENTRIES to NR_LOADSAVE_MSRS
> - Pull setup code for preparing the MSR-store area out of the final commit and
> put it in it's own commit (4/5).
> - Export vmx_find_msr_index() in the final commit instead of in commit 3/5 as
> it isn't until the final commit that we actually use it.
>
> v1 -> v2:
> - Rename function nested_vmx_get_msr_value() to
> nested_vmx_get_vmexit_msr_value().
> - Remove unneeded tag 'Change-Id' from commit messages.
>
> Aaron Lewis (4):
> kvm: nested: Introduce read_and_check_msr_entry()
> kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS
> kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
> KVM: nVMX: Add support for capturing highest observable L2 TSC
>
> arch/x86/kvm/vmx/nested.c | 136 ++++++++++++++++++++++++++++++++------
> arch/x86/kvm/vmx/vmx.c | 14 ++--
> arch/x86/kvm/vmx/vmx.h | 9 ++-
> 3 files changed, 131 insertions(+), 28 deletions(-)
>
Queued, but it would be good to have a testcase for this, either for
kvm-unit-tests or for tools/testing/selftests/kvm.
Paolo
* Re: [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC
2019-11-15 10:23 ` [PATCH v4 0/4] Add support for capturing the " Paolo Bonzini
@ 2019-11-15 14:25 ` Aaron Lewis
2019-11-15 14:26 ` Paolo Bonzini
0 siblings, 1 reply; 8+ messages in thread
From: Aaron Lewis @ 2019-11-15 14:25 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm, Jim Mattson
On Fri, Nov 15, 2019 at 2:23 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 08/11/19 06:14, Aaron Lewis wrote:
> > The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
> > vmcs12 MSR VM-exit MSR-store area as a way of determining the highest
> > TSC value that might have been observed by L2 prior to VM-exit. The
> > current implementation does not capture a very tight bound on this
> > value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
> > vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
> > MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
> > during the emulation of an L2->L1 VM-exit, special-case the
> > IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
> > VM-exit MSR-store area to derive the value to be stored in the vmcs12
> > VM-exit MSR-store area.
> >
> > v3 -> v4:
> > - Squash the final commit with the previous one used to prepare the MSR-store
> > area. There is no need for this split after all.
> >
> > v2 -> v3:
> > - Rename NR_MSR_ENTRIES to NR_LOADSAVE_MSRS
> > - Pull setup code for preparing the MSR-store area out of the final commit and
> > put it in it's own commit (4/5).
> > - Export vmx_find_msr_index() in the final commit instead of in commit 3/5 as
> > it isn't until the final commit that we actually use it.
> >
> > v1 -> v2:
> > - Rename function nested_vmx_get_msr_value() to
> > nested_vmx_get_vmexit_msr_value().
> > - Remove unneeded tag 'Change-Id' from commit messages.
> >
> > Aaron Lewis (4):
> > kvm: nested: Introduce read_and_check_msr_entry()
> > kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS
> > kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
> > KVM: nVMX: Add support for capturing highest observable L2 TSC
> >
> > arch/x86/kvm/vmx/nested.c | 136 ++++++++++++++++++++++++++++++++------
> > arch/x86/kvm/vmx/vmx.c | 14 ++--
> > arch/x86/kvm/vmx/vmx.h | 9 ++-
> > 3 files changed, 131 insertions(+), 28 deletions(-)
> >
>
> Queued, but it would be good to have a testcase for this, either for
> kvm-unit-tests or for tools/testing/selftests/kvm.
>
> Paolo
>
Agreed. I have some test cases in kvm-unit-tests for this code that
I've been using to test these changes locally; however, they would
fail upstream without "[kvm-unit-tests PATCH] x86: Fix the register
order to match struct regs" being taken first. I'll ping that patch
again.
* Re: [PATCH v4 0/4] Add support for capturing the highest observable L2 TSC
2019-11-15 14:25 ` Aaron Lewis
@ 2019-11-15 14:26 ` Paolo Bonzini
0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2019-11-15 14:26 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, Jim Mattson
On 15/11/19 15:25, Aaron Lewis wrote:
> Agreed. I have some test cases in kvm-unit-tests for this code that
> I've been using to test these changes locally, however, they would
> fail upstream without "[kvm-unit-tests PATCH] x86: Fix the register
> order to match struct regs" being taken first. I'll ping that patch
> again.
>
You've just done that... :)
Paolo