* [PATCH 0/4] Add support for capturing the highest observable L2 TSC
@ 2019-10-29 21:05 Aaron Lewis
2019-10-29 21:05 ` [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Aaron Lewis @ 2019-10-29 21:05 UTC (permalink / raw)
To: kvm; +Cc: Aaron Lewis
The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
vmcs12 VM-exit MSR-store area as a way of determining the highest
TSC value that might have been observed by L2 prior to VM-exit. The
current implementation does not capture a very tight bound on this
value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
during the emulation of an L2->L1 VM-exit, special-case the
IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
VM-exit MSR-store area to derive the value to be stored in the vmcs12
VM-exit MSR-store area.
Aaron Lewis (4):
kvm: nested: Introduce read_and_check_msr_entry()
kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES
kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
KVM: nVMX: Add support for capturing highest observable L2 TSC
arch/x86/kvm/vmx/nested.c | 126 ++++++++++++++++++++++++++++++++------
arch/x86/kvm/vmx/vmx.c | 14 ++---
arch/x86/kvm/vmx/vmx.h | 9 ++-
3 files changed, 121 insertions(+), 28 deletions(-)
--
2.24.0.rc0.303.g954a862665-goog
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry()
2019-10-29 21:05 [PATCH 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
@ 2019-10-29 21:05 ` Aaron Lewis
2019-11-04 22:41 ` Jim Mattson
2019-10-29 21:05 ` [PATCH 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES Aaron Lewis
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Aaron Lewis @ 2019-10-29 21:05 UTC (permalink / raw)
To: kvm; +Cc: Aaron Lewis
Add the function read_and_check_msr_entry(), which just pulls some code
out of nested_vmx_store_msr() for now; however, this is in preparation
for a change later in this series where we reuse the code in
read_and_check_msr_entry().
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Iaf8787198c06674e8b0555982a962f5bd288e43f
---
arch/x86/kvm/vmx/nested.c | 35 ++++++++++++++++++++++-------------
1 file changed, 22 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e76eb4f07f6c..7b058d7b9fcc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,26 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
return i + 1;
}
+static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
+ struct vmx_msr_entry *e)
+{
+ if (kvm_vcpu_read_guest(vcpu,
+ gpa + i * sizeof(*e),
+ e, 2 * sizeof(u32))) {
+ pr_debug_ratelimited(
+ "%s cannot read MSR entry (%u, 0x%08llx)\n",
+ __func__, i, gpa + i * sizeof(*e));
+ return false;
+ }
+ if (nested_vmx_store_msr_check(vcpu, e)) {
+ pr_debug_ratelimited(
+ "%s check failed (%u, 0x%x, 0x%x)\n",
+ __func__, i, e->index, e->reserved);
+ return false;
+ }
+ return true;
+}
+
static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
{
u64 data;
@@ -940,20 +960,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
if (unlikely(i >= max_msr_list_size))
return -EINVAL;
- if (kvm_vcpu_read_guest(vcpu,
- gpa + i * sizeof(e),
- &e, 2 * sizeof(u32))) {
- pr_debug_ratelimited(
- "%s cannot read MSR entry (%u, 0x%08llx)\n",
- __func__, i, gpa + i * sizeof(e));
+ if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
return -EINVAL;
- }
- if (nested_vmx_store_msr_check(vcpu, &e)) {
- pr_debug_ratelimited(
- "%s check failed (%u, 0x%x, 0x%x)\n",
- __func__, i, e.index, e.reserved);
- return -EINVAL;
- }
+
if (kvm_get_msr(vcpu, e.index, &data)) {
pr_debug_ratelimited(
"%s cannot read MSR (%u, 0x%x)\n",
--
2.24.0.rc0.303.g954a862665-goog
* [PATCH 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES
2019-10-29 21:05 [PATCH 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
2019-10-29 21:05 ` [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
@ 2019-10-29 21:05 ` Aaron Lewis
2019-11-04 22:42 ` Jim Mattson
2019-10-29 21:05 ` [PATCH 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index() Aaron Lewis
2019-10-29 21:05 ` [PATCH 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC Aaron Lewis
3 siblings, 1 reply; 10+ messages in thread
From: Aaron Lewis @ 2019-10-29 21:05 UTC (permalink / raw)
To: kvm; +Cc: Aaron Lewis
Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES. This needs to be done
due to the MSR-autostore area that will be added later in this
series. After that, the name AUTOLOAD will no longer make sense.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Iafe7c3bfb90842a93d7c453a1d8c84a48d5fe7b0
---
arch/x86/kvm/vmx/vmx.c | 4 ++--
arch/x86/kvm/vmx/vmx.h | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e7970a2e8eae..c0160ca9ddba 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -940,8 +940,8 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
if (!entry_only)
j = find_msr(&m->host, msr);
- if ((i < 0 && m->guest.nr == NR_AUTOLOAD_MSRS) ||
- (j < 0 && m->host.nr == NR_AUTOLOAD_MSRS)) {
+ if ((i < 0 && m->guest.nr == NR_MSR_ENTRIES) ||
+ (j < 0 && m->host.nr == NR_MSR_ENTRIES)) {
printk_once(KERN_WARNING "Not enough msr switch entries. "
"Can't add msr %x\n", msr);
return;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bee16687dc0b..0c6835bd6945 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -22,11 +22,11 @@ extern u32 get_umwait_control_msr(void);
#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
-#define NR_AUTOLOAD_MSRS 8
+#define NR_MSR_ENTRIES 8
struct vmx_msrs {
unsigned int nr;
- struct vmx_msr_entry val[NR_AUTOLOAD_MSRS];
+ struct vmx_msr_entry val[NR_MSR_ENTRIES];
};
struct shared_msr_entry {
--
2.24.0.rc0.303.g954a862665-goog
* [PATCH 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
2019-10-29 21:05 [PATCH 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
2019-10-29 21:05 ` [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
2019-10-29 21:05 ` [PATCH 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES Aaron Lewis
@ 2019-10-29 21:05 ` Aaron Lewis
2019-11-04 22:44 ` Jim Mattson
2019-10-29 21:05 ` [PATCH 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC Aaron Lewis
3 siblings, 1 reply; 10+ messages in thread
From: Aaron Lewis @ 2019-10-29 21:05 UTC (permalink / raw)
To: kvm; +Cc: Aaron Lewis
Rename function find_msr() to vmx_find_msr_index() to share
implementations between vmx.c and nested.c in an upcoming change.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: I544afb762ceccb72e950bb62ac2e649a8b0cfec9
---
arch/x86/kvm/vmx/vmx.c | 10 +++++-----
arch/x86/kvm/vmx/vmx.h | 1 +
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c0160ca9ddba..39c701730297 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -835,7 +835,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
vm_exit_controls_clearbit(vmx, exit);
}
-static int find_msr(struct vmx_msrs *m, unsigned int msr)
+int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
{
unsigned int i;
@@ -869,7 +869,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
}
break;
}
- i = find_msr(&m->guest, msr);
+ i = vmx_find_msr_index(&m->guest, msr);
if (i < 0)
goto skip_guest;
--m->guest.nr;
@@ -877,7 +877,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
skip_guest:
- i = find_msr(&m->host, msr);
+ i = vmx_find_msr_index(&m->host, msr);
if (i < 0)
return;
@@ -936,9 +936,9 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
}
- i = find_msr(&m->guest, msr);
+ i = vmx_find_msr_index(&m->guest, msr);
if (!entry_only)
- j = find_msr(&m->host, msr);
+ j = vmx_find_msr_index(&m->host, msr);
if ((i < 0 && m->guest.nr == NR_MSR_ENTRIES) ||
(j < 0 && m->host.nr == NR_MSR_ENTRIES)) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 0c6835bd6945..34b5fef603d8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -334,6 +334,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
+int vmx_find_msr_index(struct vmx_msrs *m, u32 msr);
#define POSTED_INTR_ON 0
#define POSTED_INTR_SN 1
--
2.24.0.rc0.303.g954a862665-goog
* [PATCH 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC
2019-10-29 21:05 [PATCH 0/4] Add support for capturing the highest observable L2 TSC Aaron Lewis
` (2 preceding siblings ...)
2019-10-29 21:05 ` [PATCH 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index() Aaron Lewis
@ 2019-10-29 21:05 ` Aaron Lewis
2019-11-04 22:48 ` Jim Mattson
3 siblings, 1 reply; 10+ messages in thread
From: Aaron Lewis @ 2019-10-29 21:05 UTC (permalink / raw)
To: kvm; +Cc: Aaron Lewis
The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
vmcs12 VM-exit MSR-store area as a way of determining the highest
TSC value that might have been observed by L2 prior to VM-exit. The
current implementation does not capture a very tight bound on this
value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
during the emulation of an L2->L1 VM-exit, special-case the
IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
VM-exit MSR-store area to derive the value to be stored in the vmcs12
VM-exit MSR-store area.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: I876e79d98e8f2e5439deafb070be1daa2e1a8e4a
---
arch/x86/kvm/vmx/nested.c | 91 ++++++++++++++++++++++++++++++++++++---
arch/x86/kvm/vmx/vmx.h | 4 ++
2 files changed, 89 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 7b058d7b9fcc..19863f2a6588 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,36 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
return i + 1;
}
+static bool nested_vmx_get_msr_value(struct kvm_vcpu *vcpu, u32 msr_index,
+ u64 *data)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+ /*
+ * If the L0 hypervisor stored a more accurate value for the TSC that
+ * does not include the time taken for emulation of the L2->L1
+ * VM-exit in L0, use the more accurate value.
+ */
+ if (msr_index == MSR_IA32_TSC) {
+ int index = vmx_find_msr_index(&vmx->msr_autostore.guest,
+ MSR_IA32_TSC);
+
+ if (index >= 0) {
+ u64 val = vmx->msr_autostore.guest.val[index].value;
+
+ *data = kvm_read_l1_tsc(vcpu, val);
+ return true;
+ }
+ }
+
+ if (kvm_get_msr(vcpu, msr_index, data)) {
+ pr_debug_ratelimited("%s cannot read MSR (0x%x)\n", __func__,
+ msr_index);
+ return false;
+ }
+ return true;
+}
+
static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
struct vmx_msr_entry *e)
{
@@ -963,12 +993,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
return -EINVAL;
- if (kvm_get_msr(vcpu, e.index, &data)) {
- pr_debug_ratelimited(
- "%s cannot read MSR (%u, 0x%x)\n",
- __func__, i, e.index);
+ if (!nested_vmx_get_msr_value(vcpu, e.index, &data))
return -EINVAL;
- }
+
if (kvm_vcpu_write_guest(vcpu,
gpa + i * sizeof(e) +
offsetof(struct vmx_msr_entry, value),
@@ -982,6 +1009,51 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
return 0;
}
+static bool nested_msr_store_list_has_msr(struct kvm_vcpu *vcpu, u32 msr_index)
+{
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ u32 count = vmcs12->vm_exit_msr_store_count;
+ u64 gpa = vmcs12->vm_exit_msr_store_addr;
+ struct vmx_msr_entry e;
+ u32 i;
+
+ for (i = 0; i < count; i++) {
+ if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
+ return false;
+
+ if (e.index == msr_index)
+ return true;
+ }
+ return false;
+}
+
+static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
+ u32 msr_index)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ struct vmx_msrs *autostore = &vmx->msr_autostore.guest;
+ int i = vmx_find_msr_index(autostore, msr_index);
+ bool in_autostore_list = i >= 0;
+ bool in_vmcs12_store_list;
+ int last;
+
+ in_vmcs12_store_list = nested_msr_store_list_has_msr(vcpu, msr_index);
+
+ if (in_vmcs12_store_list && !in_autostore_list) {
+ if (autostore->nr == NR_MSR_ENTRIES) {
+ pr_warn_ratelimited(
+ "Not enough msr entries in msr_autostore. Can't add msr %x\n",
+ msr_index);
+ return;
+ }
+ last = autostore->nr++;
+ autostore->val[last].index = msr_index;
+ } else if (!in_vmcs12_store_list && in_autostore_list) {
+ last = --autostore->nr;
+ autostore->val[i] = autostore->val[last];
+ }
+}
+
static bool nested_cr3_valid(struct kvm_vcpu *vcpu, unsigned long val)
{
unsigned long invalid_mask;
@@ -2027,7 +2099,7 @@ static void prepare_vmcs02_constant_state(struct vcpu_vmx *vmx)
* addresses are constant (for vmcs02), the counts can change based
* on L2's behavior, e.g. switching to/from long mode.
*/
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+ vmcs_write64(VM_EXIT_MSR_STORE_ADDR, __pa(vmx->msr_autostore.guest.val));
vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
@@ -2294,6 +2366,13 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
vmcs_write64(EOI_EXIT_BITMAP3, vmcs12->eoi_exit_bitmap3);
}
+ /*
+ * Make sure the msr_autostore list is up to date before we set the
+ * count in the vmcs02.
+ */
+ prepare_vmx_msr_autostore_list(&vmx->vcpu, MSR_IA32_TSC);
+
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, vmx->msr_autostore.guest.nr);
vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 34b5fef603d8..0ab1562287af 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -230,6 +230,10 @@ struct vcpu_vmx {
struct vmx_msrs host;
} msr_autoload;
+ struct msr_autostore {
+ struct vmx_msrs guest;
+ } msr_autostore;
+
struct {
int vm86_active;
ulong save_rflags;
--
2.24.0.rc0.303.g954a862665-goog
* Re: [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry()
2019-10-29 21:05 ` [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry() Aaron Lewis
@ 2019-11-04 22:41 ` Jim Mattson
2019-11-05 2:01 ` Sean Christopherson
0 siblings, 1 reply; 10+ messages in thread
From: Jim Mattson @ 2019-11-04 22:41 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm list
On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Add the function read_and_check_msr_entry() which just pulls some code
> out of nested_vmx_store_msr() for now, however, this is in preparation
> for a change later in this series where we reuse the code in
> read_and_check_msr_entry().
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> Change-Id: Iaf8787198c06674e8b0555982a962f5bd288e43f
Drop the Change-Id.
Reviewed-by: Jim Mattson <jmattson@google.com>
* Re: [PATCH 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES
2019-10-29 21:05 ` [PATCH 2/4] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES Aaron Lewis
@ 2019-11-04 22:42 ` Jim Mattson
0 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2019-11-04 22:42 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm list
On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Rename NR_AUTOLOAD_MSRS to NR_MSR_ENTRIES. This needs to be done
> due to the MSR-autostore area that will be added later
> in this series. After that the name AUTOLOAD will no longer make sense.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> Change-Id: Iafe7c3bfb90842a93d7c453a1d8c84a48d5fe7b0
Drop the Change-Id.
Reviewed-by: Jim Mattson <jmattson@google.com>
* Re: [PATCH 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
2019-10-29 21:05 ` [PATCH 3/4] kvm: vmx: Rename function find_msr() to vmx_find_msr_index() Aaron Lewis
@ 2019-11-04 22:44 ` Jim Mattson
0 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2019-11-04 22:44 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm list
On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Rename function find_msr() to vmx_find_msr_index() to share
s/to share/and export it to share/
> implementations between vmx.c and nested.c in an upcoming change.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> Change-Id: I544afb762ceccb72e950bb62ac2e649a8b0cfec9
Drop the Change-Id.
Reviewed-by: Jim Mattson <jmattson@google.com>
* Re: [PATCH 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC
2019-10-29 21:05 ` [PATCH 4/4] KVM: nVMX: Add support for capturing highest observable L2 TSC Aaron Lewis
@ 2019-11-04 22:48 ` Jim Mattson
0 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2019-11-04 22:48 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm list
On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
>
> The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
> vmcs12 VM-exit MSR-store area as a way of determining the highest
> TSC value that might have been observed by L2 prior to VM-exit. The
> current implementation does not capture a very tight bound on this
> value. To tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the
> vmcs02 VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
> MSR-store area. When L0 processes the vmcs12 VM-exit MSR-store area
> during the emulation of an L2->L1 VM-exit, special-case the
> IA32_TIME_STAMP_COUNTER MSR, using the value stored in the vmcs02
> VM-exit MSR-store area to derive the value to be stored in the vmcs12
> VM-exit MSR-store area.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> Change-Id: I876e79d98e8f2e5439deafb070be1daa2e1a8e4a
Drop the Change-Id.
> ---
> arch/x86/kvm/vmx/nested.c | 91 ++++++++++++++++++++++++++++++++++++---
> arch/x86/kvm/vmx/vmx.h | 4 ++
> 2 files changed, 89 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 7b058d7b9fcc..19863f2a6588 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -929,6 +929,36 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
> return i + 1;
> }
>
> +static bool nested_vmx_get_msr_value(struct kvm_vcpu *vcpu, u32 msr_index,
> + u64 *data)
Maybe change this to nested_vmx_get_vmexit_msr_value, to clarify that
this is for getting the value of an MSR at emulated VM-exit from L2 to
L1?
Reviewed-by: Jim Mattson <jmattson@google.com>
* Re: [PATCH 1/4] kvm: nested: Introduce read_and_check_msr_entry()
2019-11-04 22:41 ` Jim Mattson
@ 2019-11-05 2:01 ` Sean Christopherson
0 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2019-11-05 2:01 UTC (permalink / raw)
To: Jim Mattson; +Cc: Aaron Lewis, kvm list
On Mon, Nov 04, 2019 at 02:41:10PM -0800, Jim Mattson wrote:
> On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
> >
> > Add the function read_and_check_msr_entry() which just pulls some code
> > out of nested_vmx_store_msr() for now, however, this is in preparation
> > for a change later in this series where we reuse the code in
> > read_and_check_msr_entry().
> >
> > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> > Change-Id: Iaf8787198c06674e8b0555982a962f5bd288e43f
> Drop the Change-Id.
Even better, incorporate checkpatch into your flow; this is one of the few
things that is unequivocally considered an error.