* [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode
@ 2015-10-30 15:36 Radim Krčmář
2015-10-30 15:36 ` [PATCH 1/3] KVM: x86: add read_phys to x86_emulate_ops Radim Krčmář
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Radim Krčmář @ 2015-10-30 15:36 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm, Paolo Bonzini, Laszlo Ersek
This series is based on "KVM: x86: fix RSM into 64-bit protected mode,
round 2" and reverts it in [3/3]. To avoid regressions after doing so,
[1/3] introduces a helper that is used in [2/3] to hopefully get the
same behavior.
I'll set up a test environment next week, unless a random act of kindness
allows me not to.
Radim Krčmář (3):
KVM: x86: add read_phys to x86_emulate_ops
KVM: x86: handle SMBASE as physical address in RSM
KVM: x86: simplify RSM into 64-bit protected mode
arch/x86/include/asm/kvm_emulate.h | 10 +++++++++
arch/x86/kvm/emulate.c | 44 +++++++++-----------------------------
arch/x86/kvm/x86.c | 10 +++++++++
3 files changed, 30 insertions(+), 34 deletions(-)
--
2.5.3
* [PATCH 1/3] KVM: x86: add read_phys to x86_emulate_ops
2015-10-30 15:36 [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode Radim Krčmář
@ 2015-10-30 15:36 ` Radim Krčmář
2015-10-30 15:36 ` [PATCH 2/3] KVM: x86: handle SMBASE as physical address in RSM Radim Krčmář
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Radim Krčmář @ 2015-10-30 15:36 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm, Paolo Bonzini, Laszlo Ersek
We want to read physical memory directly when emulating RSM.
X86EMUL_IO_NEEDED is returned on all errors, for consistency with the
other helpers.
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
arch/x86/include/asm/kvm_emulate.h | 10 ++++++++++
arch/x86/kvm/x86.c | 10 ++++++++++
2 files changed, 20 insertions(+)
diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index e16466ec473c..96f1d1c5e6cb 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -112,6 +112,16 @@ struct x86_emulate_ops {
struct x86_exception *fault);
/*
+ * read_phys: Read bytes of standard (non-emulated/special) memory.
+ * Used for descriptor reading.
+ * @addr: [IN ] Physical address from which to read.
+ * @val: [OUT] Value read from memory.
+ * @bytes: [IN ] Number of bytes to read from memory.
+ */
+ int (*read_phys)(struct x86_emulate_ctxt *ctxt, unsigned long addr,
+ void *val, unsigned int bytes);
+
+ /*
* write_std: Write bytes of standard (non-emulated/special) memory.
* Used for descriptor writing.
* @addr: [IN ] Linear address to which to write.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 441cb9d4ec8a..ae5af651af89 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4097,6 +4097,15 @@ static int kvm_read_guest_virt_system(struct x86_emulate_ctxt *ctxt,
return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, 0, exception);
}
+static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt,
+ unsigned long addr, void *val, unsigned int bytes)
+{
+ struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+ int r = kvm_vcpu_read_guest(vcpu, addr, val, bytes);
+
+ return r < 0 ? X86EMUL_IO_NEEDED : X86EMUL_CONTINUE;
+}
+
int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
gva_t addr, void *val,
unsigned int bytes,
@@ -4832,6 +4841,7 @@ static const struct x86_emulate_ops emulate_ops = {
.write_gpr = emulator_write_gpr,
.read_std = kvm_read_guest_virt_system,
.write_std = kvm_write_guest_virt_system,
+ .read_phys = kvm_read_guest_phys_system,
.fetch = kvm_fetch_guest_virt,
.read_emulated = emulator_read_emulated,
.write_emulated = emulator_write_emulated,
--
2.5.3
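The error-mapping convention in kvm_read_guest_phys_system() — kernel-internal helpers return a negative errno, while x86_emulate_ops callbacks return X86EMUL_* codes — can be sanity-checked with a minimal userspace sketch. Everything prefixed fake_, and the X86EMUL_* values themselves, are invented stand-ins for illustration, not KVM code:

```c
#include <string.h>

/* Illustrative stand-ins for the emulator return codes (values invented). */
#define X86EMUL_CONTINUE   0
#define X86EMUL_IO_NEEDED  2

/* Stand-in for kvm_vcpu_read_guest(): 0 on success, negative errno on error. */
static int fake_read_guest(unsigned long addr, void *val, unsigned int bytes,
			   int should_fail)
{
	if (should_fail)
		return -14;		/* an -EFAULT-style failure */
	memset(val, 0xab, bytes);	/* pretend we copied guest memory */
	return 0;
}

/* The mapping the patch introduces: collapse any negative kernel error
 * into X86EMUL_IO_NEEDED, matching the other read helpers. */
static int read_phys(unsigned long addr, void *val, unsigned int bytes,
		     int should_fail)
{
	int r = fake_read_guest(addr, val, bytes, should_fail);

	return r < 0 ? X86EMUL_IO_NEEDED : X86EMUL_CONTINUE;
}
```

The point is that callers such as GET_SMSTATE only ever see X86EMUL_* codes, never raw errnos.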
* [PATCH 2/3] KVM: x86: handle SMBASE as physical address in RSM
2015-10-30 15:36 [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode Radim Krčmář
2015-10-30 15:36 ` [PATCH 1/3] KVM: x86: add read_phys to x86_emulate_ops Radim Krčmář
@ 2015-10-30 15:36 ` Radim Krčmář
2015-10-30 15:36 ` [PATCH 3/3] KVM: x86: simplify RSM into 64-bit protected mode Radim Krčmář
2015-10-31 19:50 ` [PATCH 0/3] " Laszlo Ersek
3 siblings, 0 replies; 8+ messages in thread
From: Radim Krčmář @ 2015-10-30 15:36 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm, Paolo Bonzini, Laszlo Ersek
GET_SMSTATE depends on real mode to ensure that smbase + offset is
treated as a physical address; this has already caused a bug once, after
the code was shuffled. Enforce physical addressing explicitly instead.
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
arch/x86/kvm/emulate.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 25e16b6f6ffa..59e80e0de865 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2272,8 +2272,8 @@ static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
#define GET_SMSTATE(type, smbase, offset) \
({ \
type __val; \
- int r = ctxt->ops->read_std(ctxt, smbase + offset, &__val, \
- sizeof(__val), NULL); \
+ int r = ctxt->ops->read_phys(ctxt, smbase + offset, &__val, \
+ sizeof(__val)); \
if (r != X86EMUL_CONTINUE) \
return X86EMUL_UNHANDLEABLE; \
__val; \
@@ -2507,8 +2507,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
/*
* Get back to real mode, to prepare a safe state in which to load
- * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
- * to read_std/write_std are not virtual.
+ * CR0/CR3/CR4/EFER.
*
* CR4.PCIDE must be zero, because it is a 64-bit mode only feature.
*/
--
2.5.3
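The property the patch relies on is that SMBASE is a physical address and every saved field sits at a fixed offset from it. A tiny sketch of the offset arithmetic, using only the offsets visible in this series (the full 64-bit layout is in the SDM's SMRAM state-save map); the helper names are invented:

```c
#include <stdint.h>

/* GPR i of the 64-bit state-save area, as read by rsm_load_state_64():
 * saved at smbase + 0x7ff8 - i * 8, a physical address. */
static uint64_t smstate_gpr_addr(uint64_t smbase, int i)
{
	return smbase + 0x7ff8 - (uint64_t)i * 8;
}

/* GDT base field of the 64-bit layout (offset 0x7e68 in this series). */
static uint64_t smstate_gdt_addr(uint64_t smbase)
{
	return smbase + 0x7e68;
}
```

For example, with the architectural reset SMBASE of 0x30000, RAX's save slot is at physical address 0x37ff8 regardless of whatever paging the guest had enabled before entering SMM — which is why read_std, a linear-address read, was the wrong tool here.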
* [PATCH 3/3] KVM: x86: simplify RSM into 64-bit protected mode
2015-10-30 15:36 [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode Radim Krčmář
2015-10-30 15:36 ` [PATCH 1/3] KVM: x86: add read_phys to x86_emulate_ops Radim Krčmář
2015-10-30 15:36 ` [PATCH 2/3] KVM: x86: handle SMBASE as physical address in RSM Radim Krčmář
@ 2015-10-30 15:36 ` Radim Krčmář
2015-10-31 19:50 ` [PATCH 0/3] " Laszlo Ersek
3 siblings, 0 replies; 8+ messages in thread
From: Radim Krčmář @ 2015-10-30 15:36 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm, Paolo Bonzini, Laszlo Ersek
This reverts commit 0123456789abc ("KVM: x86: fix RSM into 64-bit
protected mode, round 2"). The same behavior is now achieved by
treating SMBASE as a physical address in the previous patch.
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
arch/x86/kvm/emulate.c | 37 +++++++------------------------------
1 file changed, 7 insertions(+), 30 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 59e80e0de865..b60fed56671b 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2311,16 +2311,7 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
return X86EMUL_CONTINUE;
}
-struct rsm_stashed_seg_64 {
- u16 selector;
- struct desc_struct desc;
- u32 base3;
-};
-
-static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
- struct rsm_stashed_seg_64 *stash,
- u64 smbase,
- int n)
+static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
{
struct desc_struct desc;
int offset;
@@ -2335,20 +2326,10 @@ static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
set_desc_base(&desc, GET_SMSTATE(u32, smbase, offset + 8));
base3 = GET_SMSTATE(u32, smbase, offset + 12);
- stash[n].selector = selector;
- stash[n].desc = desc;
- stash[n].base3 = base3;
+ ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
return X86EMUL_CONTINUE;
}
-static inline void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
- struct rsm_stashed_seg_64 *stash,
- int n)
-{
- ctxt->ops->set_segment(ctxt, stash[n].selector, &stash[n].desc,
- stash[n].base3, n);
-}
-
static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
u64 cr0, u64 cr4)
{
@@ -2438,7 +2419,6 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
u32 base3;
u16 selector;
int i, r;
- struct rsm_stashed_seg_64 stash[6];
for (i = 0; i < 16; i++)
*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2480,18 +2460,15 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
dt.address = GET_SMSTATE(u64, smbase, 0x7e68);
ctxt->ops->set_gdt(ctxt, &dt);
- for (i = 0; i < ARRAY_SIZE(stash); i++) {
- r = rsm_stash_seg_64(ctxt, stash, smbase, i);
- if (r != X86EMUL_CONTINUE)
- return r;
- }
-
r = rsm_enter_protected_mode(ctxt, cr0, cr4);
if (r != X86EMUL_CONTINUE)
return r;
- for (i = 0; i < ARRAY_SIZE(stash); i++)
- rsm_load_seg_64(ctxt, stash, i);
+ for (i = 0; i < 6; i++) {
+ r = rsm_load_seg_64(ctxt, smbase, i);
+ if (r != X86EMUL_CONTINUE)
+ return r;
+ }
return X86EMUL_CONTINUE;
}
--
2.5.3
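The control flow this revert restores can be sketched in isolation: enter protected mode first, then load the six segment registers directly, bailing out on the first failure — no stash array needed once the reads no longer depend on the addressing mode. Everything here (the fake_ name, the return values) is invented for illustration:

```c
/* Illustrative stand-ins for the emulator return codes (values invented). */
#define X86EMUL_CONTINUE	0
#define X86EMUL_UNHANDLEABLE	1

/* Stand-in for rsm_load_seg_64(): fails when n == fail_at. */
static int fake_load_seg_64(int n, int fail_at)
{
	return n == fail_at ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
}

/* Mirrors the restored loop: load all six segments in order,
 * propagating the first error instead of pre-stashing everything. */
static int load_all_segs(int fail_at)
{
	int i, r;

	for (i = 0; i < 6; i++) {
		r = fake_load_seg_64(i, fail_at);
		if (r != X86EMUL_CONTINUE)
			return r;
	}
	return X86EMUL_CONTINUE;
}
```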
* Re: [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode
2015-10-30 15:36 [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode Radim Krčmář
` (2 preceding siblings ...)
2015-10-30 15:36 ` [PATCH 3/3] KVM: x86: simplify RSM into 64-bit protected mode Radim Krčmář
@ 2015-10-31 19:50 ` Laszlo Ersek
2015-11-02 9:32 ` Paolo Bonzini
3 siblings, 1 reply; 8+ messages in thread
From: Laszlo Ersek @ 2015-10-31 19:50 UTC (permalink / raw)
To: Radim Krčmář, linux-kernel; +Cc: kvm, Paolo Bonzini
On 10/30/15 16:36, Radim Krčmář wrote:
> This series is based on "KVM: x86: fix RSM into 64-bit protected mode,
> round 2" and reverts it in [3/3]. To avoid regressions after doing so,
> [1/3] introduces a helper that is used in [2/3] to hopefully get the
> same behavior.
>
> I'll set up a test environment next week, unless a random act of kindness
> allows me not to.
>
>
> Radim Krčmář (3):
> KVM: x86: add read_phys to x86_emulate_ops
> KVM: x86: handle SMBASE as physical address in RSM
> KVM: x86: simplify RSM into 64-bit protected mode
>
> arch/x86/include/asm/kvm_emulate.h | 10 +++++++++
> arch/x86/kvm/emulate.c | 44 +++++++++-----------------------------
> arch/x86/kvm/x86.c | 10 +++++++++
> 3 files changed, 30 insertions(+), 34 deletions(-)
>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Thanks, Radim.
Laszlo
* Re: [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode
2015-10-31 19:50 ` [PATCH 0/3] " Laszlo Ersek
@ 2015-11-02 9:32 ` Paolo Bonzini
2015-11-03 9:40 ` Laszlo Ersek
0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2015-11-02 9:32 UTC (permalink / raw)
To: Laszlo Ersek, Radim Krčmář, linux-kernel; +Cc: kvm
On 31/10/2015 20:50, Laszlo Ersek wrote:
> Tested-by: Laszlo Ersek <lersek@redhat.com>
Thanks Laszlo, I applied patches 1 and 2 (since your "part 2" never was :)).
Paolo
* Re: [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode
2015-11-02 9:32 ` Paolo Bonzini
@ 2015-11-03 9:40 ` Laszlo Ersek
2015-11-03 10:00 ` Paolo Bonzini
0 siblings, 1 reply; 8+ messages in thread
From: Laszlo Ersek @ 2015-11-03 9:40 UTC (permalink / raw)
To: Paolo Bonzini, Radim Krčmář, linux-kernel; +Cc: kvm
On 11/02/15 10:32, Paolo Bonzini wrote:
>
>
> On 31/10/2015 20:50, Laszlo Ersek wrote:
>> Tested-by: Laszlo Ersek <lersek@redhat.com>
>
> Thanks Laszlo, I applied patches 1 and 2 (since your "part 2" never was :)).
>
> Paolo
>
Thanks.
Since you can rebase the queue freely, can you please also add:
Reported-by: Laszlo Ersek <lersek@redhat.com>
to Radim's patch "KVM: x86: handle SMBASE as physical address in RSM"?
Thanks
Laszlo
* Re: [PATCH 0/3] KVM: x86: simplify RSM into 64-bit protected mode
2015-11-03 9:40 ` Laszlo Ersek
@ 2015-11-03 10:00 ` Paolo Bonzini
0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2015-11-03 10:00 UTC (permalink / raw)
To: Laszlo Ersek, Radim Krčmář, linux-kernel; +Cc: kvm
On 03/11/2015 10:40, Laszlo Ersek wrote:
> On 11/02/15 10:32, Paolo Bonzini wrote:
>>
>>
>> On 31/10/2015 20:50, Laszlo Ersek wrote:
>>> Tested-by: Laszlo Ersek <lersek@redhat.com>
>>
>> Thanks Laszlo, I applied patches 1 and 2 (since your "part 2" never was :)).
>>
>> Paolo
>>
>
> Thanks.
>
> Since you can rebase the queue freely, can you please also add:
>
> Reported-by: Laszlo Ersek <lersek@redhat.com>
>
> to Radim's patch "KVM: x86: handle SMBASE as physical address in RSM"?
Sure, will do.
Paolo