* [PATCH v2 0/3] KVM: PPC: Book3S: MMIO support for Little Endian guests
@ 2013-10-08 14:12 ` Cédric Le Goater
  0 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 14:12 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

The first patches add simple helper routines to load instructions
from the guest. They prepare the ground for byte-swapping
instructions when reading memory from Little Endian guests.

The last patch enables the MMIO support by byte-swapping the last 
instruction if the guest is Little Endian.

This patchset is based on Alex Graf's kvm-ppc-queue branch. It has been
tested with Anton's patchset for Big Endian and Little Endian HV guests
and Big Endian PR guests.

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

Thanks,

C.	

Cédric Le Goater (3):
  KVM: PPC: Book3S: add helper routine to load guest instructions
  KVM: PPC: Book3S: add helper routines to detect endian
  KVM: PPC: Book3S: MMIO emulation support for little endian guests

 arch/powerpc/include/asm/kvm_book3s.h   |   33 +++++++++++++++++++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_64_mmu_hv.c     |    2 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
 arch/powerpc/kvm/book3s_pr.c            |    2 +-
 arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 +++++++++++----
 8 files changed, 71 insertions(+), 14 deletions(-)

-- 
1.7.10.4

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [PATCH v2 1/3] KVM: PPC: Book3S: add helper routine to load guest instructions
  2013-10-08 14:12 ` Cédric Le Goater
@ 2013-10-08 14:12   ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 14:12 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

This patch adds a helper routine, kvmppc_ld_inst(), to load an
instruction from the guest. This routine will be modified in the
next patch to take the endian order of the guest into account.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |   16 ++++++++++++++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/book3s_pr.c          |    2 +-
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 0ec00f4..dfe8f11 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,18 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
+			      u32 *ptr, bool data)
+{
+	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+}
+
+static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
+			      u32 *inst)
+{
+	return kvmppc_ld32(vcpu, eaddr, inst, false);
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -277,7 +289,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	/* Load the instruction manually if it failed to do so in the
 	 * exit path */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld_inst(vcpu, &pc, &vcpu->arch.last_inst);
 
 	return vcpu->arch.last_inst;
 }
@@ -294,7 +306,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	/* Load the instruction manually if it failed to do so in the
 	 * exit path */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld_inst(vcpu, &pc, &vcpu->arch.last_inst);
 
 	return vcpu->arch.last_inst;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 3a89b85..0083cd0 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -544,7 +544,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * If we fail, we just return to the guest and try executing it again.
 	 */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
-		ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+		ret = kvmppc_ld_inst(vcpu, &srr0, &last_inst);
 		if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
 			return RESUME_GUEST;
 		vcpu->arch.last_inst = last_inst;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 6075dbd..a817ef6 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -600,7 +600,7 @@ static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 	u32 last_inst = kvmppc_get_last_inst(vcpu);
 	int ret;
 
-	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+	ret = kvmppc_ld_inst(vcpu, &srr0, &last_inst);
 	if (ret == -ENOENT) {
 		ulong msr = vcpu->arch.shared->msr;
 
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* [PATCH v2 2/3] KVM: PPC: Book3S: add helper routines to detect endian
  2013-10-08 14:12 ` Cédric Le Goater
@ 2013-10-08 14:12   ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 14:12 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

These helper routines will be used to decide whether or not to
byte-swap. When Little Endian host kernels arrive, they will need
to be changed accordingly.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index dfe8f11..4ee6c66 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,16 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.shared->msr & MSR_LE;
+}
+
+static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
+{
+	return !kvmppc_need_byteswap(vcpu);
+}
+
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 14:12 ` Cédric Le Goater
@ 2013-10-08 14:12   ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 14:12 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, performing a byte-swap if needed. The common code which
fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
and the exit paths for the Book3S HV and PR guests use their own
version in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed to represent a possible byte-reversal. This is
used in conjunction with kvmppc_is_bigendian() to determine
whether the instruction being emulated should be byte-swapped.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
 arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 6 files changed, 46 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4ee6c66..f043e62 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *inst)
 {
-	return kvmppc_ld32(vcpu, eaddr, inst, false);
+	int ret;
+
+	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
+
+	if (kvmppc_need_byteswap(vcpu))
+		*inst = swab32(*inst);
+
+	return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                              unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      unsigned int rt, unsigned int bytes,
+			      int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       unsigned int rt, unsigned int bytes,
+			       int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..ff7da8b 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,21 @@ fast_interrupt_c_return:
 	lwz	r8, 0(r10)
 	mtmsrd	r3
 
+	ld	r0, VCPU_MSR(r9)
+
+	andi.	r10, r0, MSR_LE
+
 	/* Store the result */
 	stw	r8, VCPU_LAST_INST(r9)
 
+	beq	after_inst_store
+
+	/* Swap and store the result */
+	addi	r11, r9, VCPU_LAST_INST
+	stwbrx	r8, 0, r11
+
 	/* Unset guest mode. */
+after_inst_store:
 	li	r0, KVM_GUEST_MODE_HOST_HV
 	stb	r0, HSTATE_IN_GUEST(r13)
 	b	guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..677ef7a 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -287,8 +287,18 @@ ld_last_inst:
 	sync
 
 #endif
+	ld	r8, SVCPU_SHADOW_SRR1(r13)
+
+	andi.	r10, r8, MSR_LE
+
 	stw	r0, SVCPU_LAST_INST(r13)
 
+	beq	no_ld_last_inst
+
+	/* swap and store the result */
+	addi	r11, r13, SVCPU_LAST_INST
+	stwbrx	r0, 0, r11
+
 no_ld_last_inst:
 
 	/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..5e38004 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 07c0106..6950f2b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -662,21 +666,25 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int not_reverse)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 14:12   ` Cédric Le Goater
@ 2013-10-08 14:25     ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2013-10-08 14:25 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates. If the guest is running in Little Endian mode,
> the instruction needs to be byte-swapped before being emulated.
> 
> This patch stores the last instruction in the endian order of the
> host, primarily doing a byte-swap if needed. The common code
> which fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
> and the exit paths for the Book3S HV and PR guests use their own
> version in assembly.
> 
> Finally, the meaning of the 'is_bigendian' argument of the
> routines kvmppc_handle_load() and kvmppc_handle_store() is
> slightly changed to represent an eventual reverse operation. This
> is used in conjunction with kvmppc_is_bigendian() to determine if
> the instruction being emulated should be byte-swapped.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> 
> Changes in v2:
> 
> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>   exit paths. (Paul Mackerras)
> 
> - moved the byte swapping logic to kvmppc_handle_load() and 
>   kvmppc_handle_store() by changing the is_bigendian parameter
>   meaning. (Paul Mackerras)
> 
> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
> arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
> arch/powerpc/kvm/emulate.c              |    1 -
> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
> 6 files changed, 46 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 4ee6c66..f043e62 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> 			      u32 *inst)
> {
> -	return kvmppc_ld32(vcpu, eaddr, inst, false);
> +	int ret;
> +
> +	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
> +
> +	if (kvmppc_need_byteswap(vcpu))
> +		*inst = swab32(*inst);

This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a byteswap, regardless of whether it's an instruction or not.


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread


* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 14:25     ` Alexander Graf
@ 2013-10-08 15:07       ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2013-10-08 15:07 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 10/08/2013 04:25 PM, Alexander Graf wrote:
> 
> On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> MMIO emulation reads the last instruction executed by the guest
>> and then emulates. If the guest is running in Little Endian mode,
>> the instruction needs to be byte-swapped before being emulated.
>>
>> This patch stores the last instruction in the endian order of the
>> host, doing a byte-swap beforehand if needed. The common code
>> which fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
>> and the exit paths for the Book3S PR and HV guests use their own
>> version in assembly.
>>
>> Finally, the meaning of the 'is_bigendian' argument of the
>> routines kvmppc_handle_load() and kvmppc_handle_store() is
>> slightly changed to represent a possible byte-reverse operation. This
>> is used in conjunction with kvmppc_is_bigendian() to determine if
>> the instruction being emulated should be byte-swapped.
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> ---
>>
>> Changes in v2:
>>
>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>   exit paths. (Paul Mackerras)
>>
>> - moved the byte swapping logic to kvmppc_handle_load() and 
>>   kvmppc_handle_store() by changing the is_bigendian parameter
>>   meaning. (Paul Mackerras)
>>
>> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
>> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
>> arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
>> arch/powerpc/kvm/emulate.c              |    1 -
>> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
>> 6 files changed, 46 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index 4ee6c66..f043e62 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
>> 			      u32 *inst)
>> {
>> -	return kvmppc_ld32(vcpu, eaddr, inst, false);
>> +	int ret;
>> +
>> +	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
>> +
>> +	if (kvmppc_need_byteswap(vcpu))
>> +		*inst = swab32(*inst);
> 
> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a 
> byteswap, regardless of whether it's an instruction or not.

Yes, the byteswap logic is not specific to instructions or data. I will move it
into kvmppc_ld32().

Thanks Alex,

C.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [PATCH v3 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 15:07       ` Cedric Le Goater
@ 2013-10-08 15:31         ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 15:31 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, doing a byte-swap beforehand if needed. The common code
which fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
and the exit paths for the Book3S PR and HV guests use their own
version in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed to represent a possible byte-reverse operation. This
is used in conjunction with kvmppc_is_bigendian() to determine if
the instruction being emulated should be byte-swapped.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

Changes in v3:

 - moved the kvmppc_need_byteswap() check into kvmppc_ld32(). It was
   previously in kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
 arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 6 files changed, 46 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4ee6c66..9403042 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+	int ret;
+
+	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+
+	if (kvmppc_need_byteswap(vcpu))
+		*ptr = swab32(*ptr);
+
+	return ret;
 }
 
 static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                              unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      unsigned int rt, unsigned int bytes,
+			      int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       unsigned int rt, unsigned int bytes,
+			       int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..ff7da8b 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,21 @@ fast_interrupt_c_return:
 	lwz	r8, 0(r10)
 	mtmsrd	r3
 
+	ld	r0, VCPU_MSR(r9)
+
+	andi.	r10, r0, MSR_LE
+
 	/* Store the result */
 	stw	r8, VCPU_LAST_INST(r9)
 
+	beq	after_inst_store
+
+	/* Swap and store the result */
+	addi	r11, r9, VCPU_LAST_INST
+	stwbrx	r8, 0, r11
+
 	/* Unset guest mode. */
+after_inst_store:
 	li	r0, KVM_GUEST_MODE_HOST_HV
 	stb	r0, HSTATE_IN_GUEST(r13)
 	b	guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..677ef7a 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -287,8 +287,18 @@ ld_last_inst:
 	sync
 
 #endif
+	ld	r8, SVCPU_SHADOW_SRR1(r13)
+
+	andi.	r10, r8, MSR_LE
+
 	stw	r0, SVCPU_LAST_INST(r13)
 
+	beq	no_ld_last_inst
+
+	/* swap and store the result */
+	addi	r11, r13, SVCPU_LAST_INST
+	stwbrx	r0, 0, r11
+
 no_ld_last_inst:
 
 	/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..5e38004 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 07c0106..6950f2b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -662,21 +666,25 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int not_reverse)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* Re: [PATCH v3 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 15:31         ` Cédric Le Goater
@ 2013-10-08 15:36           ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2013-10-08 15:36 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 08.10.2013, at 17:31, Cédric Le Goater <clg@fr.ibm.com> wrote:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates. If the guest is running in Little Endian mode,
> the instruction needs to be byte-swapped before being emulated.
> 
> This patch stores the last instruction in the endian order of the
> host, doing a byte-swap beforehand if needed. The common code
> which fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
> and the exit paths for the Book3S PR and HV guests use their own
> version in assembly.
> 
> Finally, the meaning of the 'is_bigendian' argument of the
> routines kvmppc_handle_load() and kvmppc_handle_store() is
> slightly changed to represent a possible byte-reverse operation. This
> is used in conjunction with kvmppc_is_bigendian() to determine if
> the instruction being emulated should be byte-swapped.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> 
> Changes in v3:
> 
> - moved the kvmppc_need_byteswap() check into kvmppc_ld32(). It was
>   previously in kvmppc_ld_inst(). (Alexander Graf)
> 
> Changes in v2:
> 
> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>   exit paths. (Paul Mackerras)
> 
> - moved the byte swapping logic to kvmppc_handle_load() and 
>   kvmppc_handle_store() by changing the is_bigendian parameter
>   meaning. (Paul Mackerras)
> 
> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
> arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
> arch/powerpc/kvm/emulate.c              |    1 -
> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
> 6 files changed, 46 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 4ee6c66..9403042 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> 			      u32 *ptr, bool data)
> {
> -	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> +	int ret;
> +
> +	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> +
> +	if (kvmppc_need_byteswap(vcpu))
> +		*ptr = swab32(*ptr);
> +
> +	return ret;
> }
> 
> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,

... at which point this helper is pretty useless ;).


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v3 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 15:36           ` Alexander Graf
@ 2013-10-08 16:10             ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2013-10-08 16:10 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 10/08/2013 05:36 PM, Alexander Graf wrote:
> 
> On 08.10.2013, at 17:31, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> MMIO emulation reads the last instruction executed by the guest
>> and then emulates. If the guest is running in Little Endian mode,
>> the instruction needs to be byte-swapped before being emulated.
>>
>> This patch stores the last instruction in the endian order of the
>> host, doing a byte-swap beforehand if needed. The common code
>> which fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
>> and the exit paths for the Book3S PR and HV guests use their own
>> version in assembly.
>>
>> Finally, the meaning of the 'is_bigendian' argument of the
>> routines kvmppc_handle_load() and kvmppc_handle_store() is
>> slightly changed to represent a possible byte-reverse operation. This
>> is used in conjunction with kvmppc_is_bigendian() to determine if
>> the instruction being emulated should be byte-swapped.
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> ---
>>
>> Changes in v3:
>>
>> - moved the kvmppc_need_byteswap() check into kvmppc_ld32(). It was
>>   previously in kvmppc_ld_inst(). (Alexander Graf)
>>
>> Changes in v2:
>>
>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>   exit paths. (Paul Mackerras)
>>
>> - moved the byte swapping logic to kvmppc_handle_load() and 
>>   kvmppc_handle_store() by changing the is_bigendian parameter
>>   meaning. (Paul Mackerras)
>>
>> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
>> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
>> arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
>> arch/powerpc/kvm/emulate.c              |    1 -
>> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
>> 6 files changed, 46 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index 4ee6c66..9403042 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
>> static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>> 			      u32 *ptr, bool data)
>> {
>> -	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>> +	int ret;
>> +
>> +	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>> +
>> +	if (kvmppc_need_byteswap(vcpu))
>> +		*ptr = swab32(*ptr);
>> +
>> +	return ret;
>> }
>>
>> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> 
> ... at which point this helper is pretty useless ;).

ok. This sounds like a request for a v4 ... :)

C.




^ permalink raw reply	[flat|nested] 94+ messages in thread

* [PATCH v4 0/3] KVM: PPC: Book3S: MMIO support for Little Endian guests
  2013-10-08 16:10             ` Cedric Le Goater
@ 2013-10-08 16:43               ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 16:43 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

The first two patches add simple helper routines to load instructions
from the guest. They prepare the ground for the byte-swapping of
instructions when reading memory from Little Endian guests.

The last patch enables the MMIO support by byte-swapping the last 
instruction if the guest is Little Endian.

This patchset is based on Alex Graf's kvm-ppc-queue branch. It has been 
tested with Anton's patchset for Big Endian and Little Endian HV guests 
and Big Endian PR guests. 

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() into kvmppc_ld32(). It was previously
   in kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

Thanks,

C.	

Cédric Le Goater (3):
  KVM: PPC: Book3S: add helper routine to load guest instructions
  KVM: PPC: Book3S: add helper routines to detect endian
  KVM: PPC: Book3S: MMIO emulation support for little endian guests

 arch/powerpc/include/asm/kvm_book3s.h   |   27 +++++++++++++++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_64_mmu_hv.c     |    2 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
 arch/powerpc/kvm/book3s_pr.c            |    2 +-
 arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 8 files changed, 65 insertions(+), 14 deletions(-)

-- 
1.7.10.4

^ permalink raw reply	[flat|nested] 94+ messages in thread


* [PATCH v4 1/3] KVM: PPC: Book3S: add helper routine to load guest instructions
  2013-10-08 16:43               ` Cédric Le Goater
@ 2013-10-08 16:43               ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 16:43 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

This patch adds a helper routine kvmppc_ld32() to load an
instruction from the guest. This routine will be modified in
the next patch to take the endian order of the guest into
account.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

 arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/book3s_pr.c          |    2 +-
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 0ec00f4..d11c089 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,12 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
+			      u32 *ptr, bool data)
+{
+	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -277,7 +283,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	/* Load the instruction manually if it failed to do so in the
 	 * exit path */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld32(vcpu, &pc, &vcpu->arch.last_inst, false);
 
 	return vcpu->arch.last_inst;
 }
@@ -294,7 +300,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	/* Load the instruction manually if it failed to do so in the
 	 * exit path */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld32(vcpu, &pc, &vcpu->arch.last_inst, false);
 
 	return vcpu->arch.last_inst;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 3a89b85..ff53031 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -544,7 +544,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * If we fail, we just return to the guest and try executing it again.
 	 */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
-		ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+		ret = kvmppc_ld32(vcpu, &srr0, &last_inst, false);
 		if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
 			return RESUME_GUEST;
 		vcpu->arch.last_inst = last_inst;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 6075dbd..01ed005 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -600,7 +600,7 @@ static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 	u32 last_inst = kvmppc_get_last_inst(vcpu);
 	int ret;
 
-	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+	ret = kvmppc_ld32(vcpu, &srr0, &last_inst, false);
 	if (ret == -ENOENT) {
 		ulong msr = vcpu->arch.shared->msr;
 
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 94+ messages in thread


* [PATCH v4 2/3] KVM: PPC: Book3S: add helper routines to detect endian order
  2013-10-08 16:43               ` Cédric Le Goater
@ 2013-10-08 16:43               ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 16:43 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

These routines will be used to decide whether to byte-swap or not.
When Little Endian host kernels arrive, they will need to be changed
accordingly.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index d11c089..22ec875 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,16 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.shared->msr & MSR_LE;
+}
+
+static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
+{
+	return !kvmppc_need_byteswap(vcpu);
+}
+
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* [PATCH v4 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 16:43               ` Cédric Le Goater
@ 2013-10-08 16:43               ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-10-08 16:43 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, performing a byte-swap if needed. The common code which
fetches 'last_inst' uses a helper routine kvmppc_need_byteswap(),
and the exit paths for the Book3S PR and HV guests use their own
version in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed to indicate whether a byte-reverse operation is
needed. This is used in conjunction with kvmppc_is_bigendian() to
determine if the instruction being emulated should be byte-swapped.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

Changes in v3:

 - moved kvmppc_need_byteswap() into kvmppc_ld32(). It was previously
   in kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
 arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 6 files changed, 46 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 22ec875..ac06434 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+	int ret;
+
+	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+
+	if (kvmppc_need_byteswap(vcpu))
+		*ptr = swab32(*ptr);
+
+	return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                              unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      unsigned int rt, unsigned int bytes,
+			      int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       unsigned int rt, unsigned int bytes,
+			       int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..ff7da8b 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,21 @@ fast_interrupt_c_return:
 	lwz	r8, 0(r10)
 	mtmsrd	r3
 
+	ld	r0, VCPU_MSR(r9)
+
+	andi.	r10, r0, MSR_LE
+
 	/* Store the result */
 	stw	r8, VCPU_LAST_INST(r9)
 
+	beq	after_inst_store
+
+	/* Swap and store the result */
+	addi	r11, r9, VCPU_LAST_INST
+	stwbrx	r8, 0, r11
+
 	/* Unset guest mode. */
+after_inst_store:
 	li	r0, KVM_GUEST_MODE_HOST_HV
 	stb	r0, HSTATE_IN_GUEST(r13)
 	b	guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..677ef7a 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -287,8 +287,18 @@ ld_last_inst:
 	sync
 
 #endif
+	ld	r8, SVCPU_SHADOW_SRR1(r13)
+
+	andi.	r10, r8, MSR_LE
+
 	stw	r0, SVCPU_LAST_INST(r13)
 
+	beq	no_ld_last_inst
+
+	/* swap and store the result */
+	addi	r11, r13, SVCPU_LAST_INST
+	stwbrx	r0, 0, r11
+
 no_ld_last_inst:
 
 	/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..5e38004 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 07c0106..6950f2b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -662,21 +666,25 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int not_reverse)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 14:25     ` Alexander Graf
@ 2013-10-08 23:31       ` Paul Mackerras
  -1 siblings, 0 replies; 94+ messages in thread
From: Paul Mackerras @ 2013-10-08 23:31 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Cédric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list

On Tue, Oct 08, 2013 at 04:25:31PM +0200, Alexander Graf wrote:
> 
> On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
> > MMIO emulation reads the last instruction executed by the guest
> > and then emulates. If the guest is running in Little Endian mode,
> > the instruction needs to be byte-swapped before being emulated.
> > 
> > This patch stores the last instruction in the endian order of the
> > host, primarily doing a byte-swap if needed. The common code
> > which fetches 'last_inst' uses a helper routine kvmppc_need_byteswap().
> > and the exit paths for the Book3S PV and HR guests use their own
> > version in assembly.
> > 
> > Finally, the meaning of the 'is_bigendian' argument of the
> > routines kvmppc_handle_load() of kvmppc_handle_store() is
> > slightly changed to represent an eventual reverse operation. This
> > is used in conjunction with kvmppc_is_bigendian() to determine if
> > the instruction being emulated should be byte-swapped.
> > 
> > Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> > ---
> > 
> > Changes in v2:
> > 
> > - replaced rldicl. by andi. to test the MSR_LE bit in the guest
> >   exit paths. (Paul Mackerras)
> > 
> > - moved the byte swapping logic to kvmppc_handle_load() and 
> >   kvmppc_handle_load() by changing the is_bigendian parameter
> >   meaning. (Paul Mackerras)
> > 
> > arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
> > arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
> > arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
> > arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
> > arch/powerpc/kvm/emulate.c              |    1 -
> > arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
> > 6 files changed, 46 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> > index 4ee6c66..f043e62 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> > static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> > 			      u32 *inst)
> > {
> > -	return kvmppc_ld32(vcpu, eaddr, inst, false);
> > +	int ret;
> > +
> > +	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
> > +
> > +	if (kvmppc_need_byteswap(vcpu))
> > +		*inst = swab32(*inst);
> 
> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a byteswap, regardless of whether it's an instruction or not.

True, until we get to POWER8 with its split little-endian support,
where instructions and data can have different endianness...

Paul.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 23:31       ` Paul Mackerras
@ 2013-10-08 23:46         ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2013-10-08 23:46 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Cédric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list



Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:

> On Tue, Oct 08, 2013 at 04:25:31PM +0200, Alexander Graf wrote:
>> 
>> On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:
>> 
>>> MMIO emulation reads the last instruction executed by the guest
>>> and then emulates it. If the guest is running in Little Endian mode,
>>> the instruction needs to be byte-swapped before being emulated.
>>> 
>>> This patch stores the last instruction in the endian order of the
>>> host, byte-swapping it if needed. The common code
>>> which fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
>>> and the exit paths for the Book3S HV and PR guests use their own
>>> versions in assembly.
>>> 
>>> Finally, the meaning of the 'is_bigendian' argument of the
>>> routines kvmppc_handle_load() and kvmppc_handle_store() is
>>> slightly changed to represent a possible reverse operation. This
>>> is used in conjunction with kvmppc_is_bigendian() to determine if
>>> the instruction being emulated should be byte-swapped.
>>> 
>>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>>> ---
>>> 
>>> Changes in v2:
>>> 
>>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>>  exit paths. (Paul Mackerras)
>>> 
>>> - moved the byte swapping logic to kvmppc_handle_load() and 
>>>  kvmppc_handle_store() by changing the is_bigendian parameter
>>>  meaning. (Paul Mackerras)
>>> 
>>> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
>>> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
>>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
>>> arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
>>> arch/powerpc/kvm/emulate.c              |    1 -
>>> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
>>> 6 files changed, 46 insertions(+), 11 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>>> index 4ee6c66..f043e62 100644
>>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>>> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>>> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
>>>                  u32 *inst)
>>> {
>>> -    return kvmppc_ld32(vcpu, eaddr, inst, false);
>>> +    int ret;
>>> +
>>> +    ret = kvmppc_ld32(vcpu, eaddr, inst, false);
>>> +
>>> +    if (kvmppc_need_byteswap(vcpu))
>>> +        *inst = swab32(*inst);
>> 
>> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a byteswap, regardless of whether it's an instruction or not.
> 
> True, until we get to POWER8 with its split little-endian support,
> where instructions and data can have different endianness...

How exactly does that work?

Alex

> 
> Paul.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-08 23:46         ` Alexander Graf
@ 2013-10-09  5:59           ` Paul Mackerras
  -1 siblings, 0 replies; 94+ messages in thread
From: Paul Mackerras @ 2013-10-09  5:59 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Cédric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list

On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
> 
> 
> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
> 
> > True, until we get to POWER8 with its split little-endian support,
> > where instructions and data can have different endianness...
> 
> How exactly does that work?

They added an extra MSR bit called SLE which enables the split-endian
mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
LE bit controls instruction endianness, and data endianness depends on
LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
LE=0 you get little-endian data and big-endian instructions, and vice
versa with SLE=1 and LE=1.

There is also a user accessible "mtsle" instruction that sets the
value of the SLE bit.  This enables programs to flip their data
endianness back and forth quickly, so it's usable for short
instruction sequences, without the need to generate instructions of
the opposite endianness.
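The rule described above can be sketched as a small model (illustrative only: plain booleans stand in for the MSR[LE] and MSR[SLE] bits, and the helper names are not the kernel's):

```c
#include <stdbool.h>

/* Instruction endianness depends only on MSR[LE]. */
static bool insn_is_little_endian(bool msr_le, bool msr_sle)
{
	(void)msr_sle;		/* SLE does not affect instruction fetch */
	return msr_le;
}

/* Data endianness is MSR[LE] XOR MSR[SLE]; with SLE = 0 things work as before. */
static bool data_is_little_endian(bool msr_le, bool msr_sle)
{
	return msr_le ^ msr_sle;
}
```

With SLE=1 and LE=0 this gives little-endian data and big-endian instructions, matching the description above.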

Paul.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-09  5:59           ` Paul Mackerras
@ 2013-10-09  8:29             ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2013-10-09  8:29 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Cédric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list



Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:

> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>> 
>> 
>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
>> 
>>> True, until we get to POWER8 with its split little-endian support,
>>> where instructions and data can have different endianness...
>> 
>> How exactly does that work?
> 
> They added an extra MSR bit called SLE which enables the split-endian
> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
> LE bit controls instruction endianness, and data endianness depends on
> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
> LE=0 you get little-endian data and big-endian instructions, and vice
> versa with SLE=1 and LE=1.

So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?


Alex

> 
> There is also a user accessible "mtsle" instruction that sets the
> value of the SLE bit.  This enables programs to flip their data
> endianness back and forth quickly, so it's usable for short
> instruction sequences, without the need to generate instructions of
> the opposite endianness.
> 
> Paul.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-09  8:29             ` Alexander Graf
@ 2013-10-09  8:42               ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2013-10-09  8:42 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 10/09/2013 10:29 AM, Alexander Graf wrote:
> 
> 
> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:
> 
>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>
>>>
>>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
>>>
>>>> True, until we get to POWER8 with its split little-endian support,
>>>> where instructions and data can have different endianness...
>>>
>>> How exactly does that work?
>>
>> They added an extra MSR bit called SLE which enables the split-endian
>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>> LE bit controls instruction endianness, and data endianness depends on
>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>> LE=0 you get little-endian data and big-endian instructions, and vice
>> versa with SLE=1 and LE=1.
> 
> So ld32 should only honor LE and get_last_inst only looks at SLE and 
> swaps even the vcpu cached version if it's set, no?
 
Here is the table (PowerISA) illustrating the endian modes for all
combinations:

	SLE	LE	Data	Instruction

	0	0	Big	Big
	0	1	Little	Little
	1	0	Little	Big
	1	1	Big	Little


My understanding is that when reading instructions we should test MSR[LE],
and for data, MSR[SLE] ^ MSR[LE].

This has to be done in conjunction with the host endian order to determine 
if we should byte-swap or not, but we can assume the host is big endian
for the moment and fix the byte-swapping later.
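A minimal model of that decision could look like the following (hypothetical helper: booleans stand in for the host endianness and the guest MSR bits, unlike the real code, which reads the MSR):

```c
#include <stdbool.h>

/*
 * Do we need to byte-swap a guest access?  Swap when the effective
 * endianness of the access differs from the host's.  Data accesses
 * honour MSR[SLE] ^ MSR[LE]; instruction fetches honour MSR[LE] only.
 */
static bool need_byteswap(bool host_le, bool guest_le, bool guest_sle,
			  bool is_data)
{
	bool access_le = is_data ? (guest_le ^ guest_sle) : guest_le;

	return access_le != host_le;
}
```

On a big-endian host (host_le == false) with SLE=0 this reduces to testing MSR[LE], which is what the current patch does.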

C.



>>
>> There is also a user accessible "mtsle" instruction that sets the
>> value of the SLE bit.  This enables programs to flip their data
>> endianness back and forth quickly, so it's usable for short
>> instruction sequences, without the need to generate instructions of
>> the opposite endianness.
>>
>> Paul.
> 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-09  8:29             ` Alexander Graf
@ 2013-10-10 10:16               ` Paul Mackerras
  -1 siblings, 0 replies; 94+ messages in thread
From: Paul Mackerras @ 2013-10-10 10:16 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Cédric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list

On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
> 
> 
> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:
> 
> > On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
> >> 
> >> 
> >> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
> >> 
> >>> True, until we get to POWER8 with its split little-endian support,
> >>> where instructions and data can have different endianness...
> >> 
> >> How exactly does that work?
> > 
> > They added an extra MSR bit called SLE which enables the split-endian
> > mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
> > LE bit controls instruction endianness, and data endianness depends on
> > LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
> > LE=0 you get little-endian data and big-endian instructions, and vice
> > versa with SLE=1 and LE=1.
> 
> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?

Not exactly; instruction endianness depends only on MSR[LE], so
get_last_inst should not look at MSR[SLE].  I would think the vcpu
cached version should be host endian always.

Paul.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-10-10 10:16               ` Paul Mackerras
@ 2013-11-04 11:44                 ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2013-11-04 11:44 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Cédric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list


On 10.10.2013, at 12:16, Paul Mackerras <paulus@samba.org> wrote:

> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>> 
>> 
>> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:
>> 
>>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>> 
>>>> 
>>>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
>>>> 
>>>>> True, until we get to POWER8 with its split little-endian support,
>>>>> where instructions and data can have different endianness...
>>>> 
>>>> How exactly does that work?
>>> 
>>> They added an extra MSR bit called SLE which enables the split-endian
>>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>>> LE bit controls instruction endianness, and data endianness depends on
>>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>>> LE=0 you get little-endian data and big-endian instructions, and vice
>>> versa with SLE=1 and LE=1.
>> 
>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?
> 
> Not exactly; instruction endianness depends only on MSR[LE], so
> get_last_inst should not look at MSR[SLE].  I would think the vcpu
> cached version should be host endian always.

I agree. It makes the code flow easier.


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-11-04 11:44                 ` Alexander Graf
@ 2013-11-05 12:28                   ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2013-11-05 12:28 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 11/04/2013 12:44 PM, Alexander Graf wrote:
> 
> On 10.10.2013, at 12:16, Paul Mackerras <paulus@samba.org> wrote:
> 
>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>>>
>>>
>>> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:
>>>
>>>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>>>
>>>>>
>>>>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
>>>>>
>>>>>> True, until we get to POWER8 with its split little-endian support,
>>>>>> where instructions and data can have different endianness...
>>>>>
>>>>> How exactly does that work?
>>>>
>>>> They added an extra MSR bit called SLE which enables the split-endian
>>>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>>>> LE bit controls instruction endianness, and data endianness depends on
>>>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>>>> LE=0 you get little-endian data and big-endian instructions, and vice
>>>> versa with SLE=1 and LE=1.
>>>
>>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?
>>
>> Not exactly; instruction endianness depends only on MSR[LE], so
>> get_last_inst should not look at MSR[SLE].  I would think the vcpu
>> cached version should be host endian always.
> 
> I agree. It makes the code flow easier.


To take into account the host endian order when deciding whether to
byteswap, we could modify kvmppc_need_byteswap() as follows:


+/*
+ * Compare endian order of host and guest to determine whether we need
+ * to byteswap or not
+ */
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
-       return vcpu->arch.shared->msr & MSR_LE;
+       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
+               ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
 }



and I think MSR[SLE] could be handled this way:


 static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
@@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
                              u32 *ptr, bool data)
 {
        int ret;
+       bool byteswap;
 
        ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
 
-       if (kvmppc_need_byteswap(vcpu))
+       byteswap = kvmppc_need_byteswap(vcpu);
+
+       /* if we are loading data from a guest which is in Split
+        * Little Endian mode, the byte order is reversed 
+        */
+       if (data && (vcpu->arch.shared->msr & MSR_SLE))
+               byteswap = !byteswap;
+
+       if (byteswap)
                *ptr = swab32(*ptr);
 
        return ret;


How does that look?

This is not tested and the MSR_SLE definition is missing. I will fix that in v5.

Thanks,

C.

^ permalink raw reply	[flat|nested] 94+ messages in thread


* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-11-05 12:28                   ` Cedric Le Goater
@ 2013-11-05 13:01                     ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2013-11-05 13:01 UTC (permalink / raw)
  To: Cedric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 05.11.2013, at 13:28, Cedric Le Goater <clg@fr.ibm.com> wrote:

> On 11/04/2013 12:44 PM, Alexander Graf wrote:
>> 
>> On 10.10.2013, at 12:16, Paul Mackerras <paulus@samba.org> wrote:
>> 
>>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>>>> 
>>>> 
>>>> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:
>>>> 
>>>>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>>>> 
>>>>>> 
>>>>>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
>>>>>> 
>>>>>>> True, until we get to POWER8 with its split little-endian support,
>>>>>>> where instructions and data can have different endianness...
>>>>>> 
>>>>>> How exactly does that work?
>>>>> 
>>>>> They added an extra MSR bit called SLE which enables the split-endian
>>>>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>>>>> LE bit controls instruction endianness, and data endianness depends on
>>>>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>>>>> LE=0 you get little-endian data and big-endian instructions, and vice
>>>>> versa with SLE=1 and LE=1.
>>>> 
>>>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?
>>> 
>>> Not exactly; instruction endianness depends only on MSR[LE], so
>>> get_last_inst should not look at MSR[SLE].  I would think the vcpu
>>> cached version should be host endian always.
>> 
>> I agree. It makes the code flow easier.
> 
> 
> To take the host endian order into account when deciding whether to
> byteswap, we could modify kvmppc_need_byteswap() as follows:
> 
> 
> +/*
> + * Compare endian order of host and guest to determine whether we need
> + * to byteswap or not
> + */
> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> {
> -       return vcpu->arch.shared->msr & MSR_LE;
> +       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^

mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.

> +               ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
> }
> 
> 
> 
> and I think MSR[SLE] could be handled this way :
> 
> 
> static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> @@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>                              u32 *ptr, bool data)
> {
>        int ret;
> +       bool byteswap;
> 
>        ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> 
> -       if (kvmppc_need_byteswap(vcpu))
> +       byteswap = kvmppc_need_byteswap(vcpu);
> +
> +       /* if we are loading data from a guest which is in Split
> +        * Little Endian mode, the byte order is reversed 

Only for data. Instructions are still non-reverse. You express this well in the code, but not in the comment.

> +        */
> +       if (data && (vcpu->arch.shared->msr & MSR_SLE))
> +               byteswap = !byteswap;
> +
> +       if (byteswap)
>                *ptr = swab32(*ptr);
> 
>        return ret;
> 
> 
> How does that look ? 
> 
> This is not tested and the MSR_SLE definition is missing. I will fix that in v5.

Alrighty :)


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread


* [PATCH v5 0/6] KVM: PPC: Book3S: MMIO support for Little Endian guests
  2013-11-05 13:01                     ` Alexander Graf
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

The first patches add simple helper routines to load instructions from
the guest. They prepare the ground for the byte-swapping of instructions
when reading memory from Little Endian guests.

The last patches enable the MMIO support by byte-swapping the last 
instruction if the guest is Little Endian and add support for little
endian host and Split Little Endian mode.

This patchset is based on Alex Graf's kvm-ppc-queue branch. It has been 
tested with anton's patchset for Big Endian and Little Endian HV guests 
and Big Endian PR guests. 

The kvm-ppc-queue branch I am using might be a bit outdated. Its HEAD
is at:

   0c58eb4 KVM: PPC: E500: Add userspace debug stub support


Changes in v5:
 
 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)
 
Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

Thanks,

C.	

Cédric Le Goater (6):
  KVM: PPC: Book3S: add helper routine to load guest instructions
  KVM: PPC: Book3S: add helper routines to detect endian
  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host
  powerpc: add Split Little Endian bit to MSR
  KVM: PPC: Book3S: modify byte loading when guest uses Split Little Endian

 arch/powerpc/include/asm/kvm_book3s.h   |   43 +++++++++++++++++++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++----
 arch/powerpc/include/asm/reg.h          |    3 +++
 arch/powerpc/kernel/process.c           |    1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c     |    2 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++
 arch/powerpc/kvm/book3s_pr.c            |    2 +-
 arch/powerpc/kvm/book3s_segment.S       |    9 +++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 +++++++++---
 10 files changed, 82 insertions(+), 14 deletions(-)

-- 
1.7.10.4

^ permalink raw reply	[flat|nested] 94+ messages in thread


* [PATCH v5 1/6] KVM: PPC: Book3S: add helper routine to load guest instructions
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

This patch adds a helper routine kvmppc_ld32() to load an
instruction from the guest. This routine will be modified in
the next patch to take into account the endian order of the
guest.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

 arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/book3s_pr.c          |    2 +-
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 0ec00f4..d11c089 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,12 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
+			      u32 *ptr, bool data)
+{
+	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -277,7 +283,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	/* Load the instruction manually if it failed to do so in the
 	 * exit path */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld32(vcpu, &pc, &vcpu->arch.last_inst, false);
 
 	return vcpu->arch.last_inst;
 }
@@ -294,7 +300,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	/* Load the instruction manually if it failed to do so in the
 	 * exit path */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
-		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
+		kvmppc_ld32(vcpu, &pc, &vcpu->arch.last_inst, false);
 
 	return vcpu->arch.last_inst;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 3a89b85..ff53031 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -544,7 +544,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * If we fail, we just return to the guest and try executing it again.
 	 */
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
-		ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+		ret = kvmppc_ld32(vcpu, &srr0, &last_inst, false);
 		if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
 			return RESUME_GUEST;
 		vcpu->arch.last_inst = last_inst;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 6075dbd..01ed005 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -600,7 +600,7 @@ static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 	u32 last_inst = kvmppc_get_last_inst(vcpu);
 	int ret;
 
-	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
+	ret = kvmppc_ld32(vcpu, &srr0, &last_inst, false);
 	if (ret == -ENOENT) {
 		ulong msr = vcpu->arch.shared->msr;
 
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* [PATCH v5 2/6] KVM: PPC: Book3S: add helper routines to detect endian
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

They will be used to decide whether to byte-swap or not. When Little
Endian host kernels come, these routines will need to be changed
accordingly.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index d11c089..22ec875 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -270,6 +270,16 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.shared->msr & MSR_LE;
+}
+
+static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
+{
+	return !kvmppc_need_byteswap(vcpu);
+}
+
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread


* [PATCH v5 3/6]  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, performing a byte-swap if needed. The common code which
fetches 'last_inst' uses a helper routine, kvmppc_need_byteswap(),
and the exit paths for the Book3S HV and PR guests use their own
version in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed: it now indicates whether the operation should
be byte-reversed. This is used in conjunction with
kvmppc_is_bigendian() to determine if the instruction being
emulated should be byte-swapped.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
---

Changes in v5:
 
 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++++
 arch/powerpc/kvm/book3s_segment.S       |    9 +++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 6 files changed, 43 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 22ec875..ac06434 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+	int ret;
+
+	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+
+	if (kvmppc_need_byteswap(vcpu))
+		*ptr = swab32(*ptr);
+
+	return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                              unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      unsigned int rt, unsigned int bytes,
+			      int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       unsigned int rt, unsigned int bytes,
+			       int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..89d4fbe 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,19 @@ fast_interrupt_c_return:
 	lwz	r8, 0(r10)
 	mtmsrd	r3
 
+	andi.	r0, r11, MSR_LE
+
 	/* Store the result */
 	stw	r8, VCPU_LAST_INST(r9)
 
+	beq	after_inst_store
+
+	/* Swap and store the result */
+	addi	r4, r9, VCPU_LAST_INST
+	stwbrx	r8, 0, r4
+
 	/* Unset guest mode. */
+after_inst_store:
 	li	r0, KVM_GUEST_MODE_HOST_HV
 	stb	r0, HSTATE_IN_GUEST(r13)
 	b	guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..a942390 100644
* [PATCH v5 3/6]  KVM: PPC: Book3S: MMIO emulation support for little endian guests
@ 2013-11-05 17:22                       ` Cédric Le Goater
  0 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, byte-swapping it if needed. The common code which fetches
'last_inst' uses a helper routine, kvmppc_need_byteswap(), and the
exit paths for the Book3S PR and HV guests use their own versions
in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed to represent a possible reverse operation. This
is used in conjunction with kvmppc_is_bigendian() to determine
whether the instruction being emulated should be byte-swapped.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
---

Changes in v5:
 
 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++++
 arch/powerpc/kvm/book3s_segment.S       |    9 +++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 6 files changed, 43 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 22ec875..ac06434 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
 static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
-	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+	int ret;
+
+	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
+
+	if (kvmppc_need_byteswap(vcpu))
+		*ptr = swab32(*ptr);
+
+	return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                              unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      unsigned int rt, unsigned int bytes,
+			      int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       unsigned int rt, unsigned int bytes,
+			       int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..89d4fbe 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,19 @@ fast_interrupt_c_return:
 	lwz	r8, 0(r10)
 	mtmsrd	r3
 
+	andi.	r0, r11, MSR_LE
+
 	/* Store the result */
 	stw	r8, VCPU_LAST_INST(r9)
 
+	beq	after_inst_store
+
+	/* Swap and store the result */
+	addi	r4, r9, VCPU_LAST_INST
+	stwbrx	r8, 0, r4
+
 	/* Unset guest mode. */
+after_inst_store:
 	li	r0, KVM_GUEST_MODE_HOST_HV
 	stb	r0, HSTATE_IN_GUEST(r13)
 	b	guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..a942390 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -289,6 +289,15 @@ ld_last_inst:
 #endif
 	stw	r0, SVCPU_LAST_INST(r13)
 
+#ifdef CONFIG_PPC64
+	andi.	r9, r4, MSR_LE
+	beq	no_ld_last_inst
+
+	/* swap and store the result */
+	addi	r9, r13, SVCPU_LAST_INST
+	stwbrx	r0, 0, r9
+#endif
+
 no_ld_last_inst:
 
 	/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..5e38004 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 07c0106..6950f2b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -662,21 +666,25 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int not_reverse)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH v5 4/6] KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

If the host is in little endian order, there is no need to byte-swap
for little endian guests.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ac06434..6974aa0 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -272,7 +272,11 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
+#ifdef __LITTLE_ENDIAN__
+	return !(vcpu->arch.shared->msr & MSR_LE);
+#else
 	return vcpu->arch.shared->msr & MSR_LE;
+#endif
 }
 
 static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH v5 5/6] powerpc: add Split Little Endian bit to MSR
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater, Benjamin Herrenschmidt

Architecture 2.07 defines a new MSR Split Little Endian (SLE) bit,
which changes the order used for data storage accesses.

If MSR[SLE] is 0, instruction and data storage accesses for the
thread are the same and use the value specified by MSR[LE].

If MSR[SLE] is 1, instruction and data storage accesses for the
thread are opposite. Instruction storage accesses use the value
specified by MSR[LE]. Data storage accesses use the value specified
by ~MSR[LE].

The table below illustrates the Endian modes for all combinations
of MSR[SLE] and MSR[LE].

	SLE	LE	Data	Instruction

	0	0	Big	Big
	0	1	Little	Little
	1	0	Little	Big
	1	1	Big	Little

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

This is clearly a powerpc patch but the following patch is for
kvm-ppc and depends on it. I am not sure how to handle the patch
series. Should I send it to linuxppc-dev@lists.ozlabs.org as well?

Thanks,

C.

 arch/powerpc/include/asm/reg.h |    3 +++
 arch/powerpc/kernel/process.c  |    1 +
 2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 5c45787..1464ef9 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -29,6 +29,7 @@
 #define MSR_SF_LG	63              /* Enable 64 bit mode */
 #define MSR_ISF_LG	61              /* Interrupt 64b mode valid on 630 */
 #define MSR_HV_LG 	60              /* Hypervisor state */
+#define MSR_SLE_LG	58		/* Split Little Endian */
 #define MSR_TS_T_LG	34		/* Trans Mem state: Transactional */
 #define MSR_TS_S_LG	33		/* Trans Mem state: Suspended */
 #define MSR_TS_LG	33		/* Trans Mem state (2 bits) */
@@ -68,11 +69,13 @@
 #define MSR_SF		__MASK(MSR_SF_LG)	/* Enable 64 bit mode */
 #define MSR_ISF		__MASK(MSR_ISF_LG)	/* Interrupt 64b mode valid on 630 */
 #define MSR_HV 		__MASK(MSR_HV_LG)	/* Hypervisor state */
+#define MSR_SLE		__MASK(MSR_SLE_LG)	/* Split Little Endian */
 #else
 /* so tests for these bits fail on 32-bit */
 #define MSR_SF		0
 #define MSR_ISF		0
 #define MSR_HV		0
+#define MSR_SLE		0
 #endif
 
 #define MSR_VEC		__MASK(MSR_VEC_LG)	/* Enable AltiVec */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 5c466aa..7f87981 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -799,6 +799,7 @@ static struct regbit {
 #if defined(CONFIG_PPC64) && !defined(CONFIG_BOOKE)
 	{MSR_SF,	"SF"},
 	{MSR_HV,	"HV"},
+	{MSR_SLE,	"SLE"},
 #endif
 	{MSR_VEC,	"VEC"},
 	{MSR_VSX,	"VSX"},
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH v5 6/6] KVM: PPC: Book3S: modify byte loading when guest uses Split Little Endian
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-05 17:22                       ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2013-11-05 17:22 UTC (permalink / raw)
  To: agraf, paulus; +Cc: kvm-ppc, kvm, Cédric Le Goater

Instruction and data storage accesses use opposite byte orders
when the Split Little Endian (SLE) mode is in use. This patch
modifies the kvmppc_ld32() routine to reverse the byteswap when
the guest is in SLE mode.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |   14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6974aa0..eac8808 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -288,10 +288,22 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
 	int ret;
+	bool byteswap;
 
 	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
 
-	if (kvmppc_need_byteswap(vcpu))
+	byteswap = kvmppc_need_byteswap(vcpu);
+
+	/* When in Split Little Endian (SLE) mode, instruction and
+	 * data storage accesses are done in opposite order. If the
+	 * guest is using this mode, we need to reverse the byteswap
+	 * for data accesses only. Instruction accesses are left
+	 * unchanged.
+	 */
+	if (data && (vcpu->arch.shared->msr & MSR_SLE))
+		byteswap = !byteswap;
+
+	if (byteswap)
 		*ptr = swab32(*ptr);
 
 	return ret;
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-11-05 13:01                     ` Alexander Graf
@ 2013-11-06  5:55                       ` Paul Mackerras
  -1 siblings, 0 replies; 94+ messages in thread
From: Paul Mackerras @ 2013-11-06  5:55 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Cedric Le Goater, kvm-ppc, kvm@vger.kernel.org mailing list

On Tue, Nov 05, 2013 at 02:01:14PM +0100, Alexander Graf wrote:
> 
> On 05.11.2013, at 13:28, Cedric Le Goater <clg@fr.ibm.com> wrote:
> 
> > +/*
> > + * Compare endian order of host and guest to determine whether we need
> > + * to byteswap or not
> > + */
> > static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> > {
> > -       return vcpu->arch.shared->msr & MSR_LE;
> > +       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
> 
> mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.

Or (MSR_KERNEL & MSR_LE).

> > +       /* if we are loading data from a guest which is in Split
> > +        * Little Endian mode, the byte order is reversed 
> 
> Only for data. Instructions are still non-reverse. You express this well in the code, but not in the comment.

Well, his comment does say "if we are loading data", but I agree it's
slightly ambiguous (the guest's instructions are our data).

Paul.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-11-06  5:55                       ` Paul Mackerras
@ 2013-11-08 14:29                         ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2013-11-08 14:29 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: Alexander Graf, kvm-ppc, kvm@vger.kernel.org mailing list

On 11/06/2013 06:55 AM, Paul Mackerras wrote:
> On Tue, Nov 05, 2013 at 02:01:14PM +0100, Alexander Graf wrote:
>>
>> On 05.11.2013, at 13:28, Cedric Le Goater <clg@fr.ibm.com> wrote:
>>
>>> +/*
>>> + * Compare endian order of host and guest to determine whether we need
>>> + * to byteswap or not
>>> + */
>>> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>>> {
>>> -       return vcpu->arch.shared->msr & MSR_LE;
>>> +       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
>>
>> mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.
> 
> Or (MSR_KERNEL & MSR_LE).

yes. That is better. I will resend the patch with an update. That was 
quite laborious for a single line patch ... 

Thanks,

C.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [PATCH v5.1 4/6] KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2013-11-08 14:36                         ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2013-11-08 14:36 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: agraf, paulus, kvm-ppc, kvm

If the host has the same endian order as the guest, there is no need 
to byte-swap.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ac06434..6974aa0 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -272,7 +272,7 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.shared->msr & MSR_LE;
+	return ((vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE));
 }
 
 static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* Re: [PATCH v5 2/6] KVM: PPC: Book3S: add helper routines to detect endian
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2014-01-02 20:05                         ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-02 20:05 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:

> They will be used to decide whether to byte-swap or not. When Little
> Endian host kernels come, these routines will need to be changed
> accordingly.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++++
> 1 file changed, 10 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index d11c089..22ec875 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -270,6 +270,16 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> 	return vcpu->arch.pc;
> }
> 
> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.shared->msr & MSR_LE;
> +}
> +
> +static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> +{
> +	return !kvmppc_need_byteswap(vcpu);

This is logically reversed. kvmppc_need_byteswap should check kvmppc_is_bigendian(), not the other way around.


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v5 3/6]  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2014-01-02 20:22                         ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-02 20:22 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:

> MMIO emulation reads the last instruction executed by the guest 
> and then emulates. If the guest is running in Little Endian mode, 
> the instruction needs to be byte-swapped before being emulated.
> 
> This patch stores the last instruction in the endian order of the
> host, byte-swapping it if needed. The common code which fetches
> 'last_inst' uses a helper routine, kvmppc_need_byteswap(), and the
> exit paths for the Book3S PR and HV guests use their own versions
> in assembly.
> 
> Finally, the meaning of the 'is_bigendian' argument of the
> routines kvmppc_handle_load() and kvmppc_handle_store() is
> slightly changed to represent a possible reverse operation. This
> is used in conjunction with kvmppc_is_bigendian() to determine
> whether the instruction being emulated should be byte-swapped.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> 
> Changes in v5:
> 
> - changed register usage slightly (paulus@samba.org)
> - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
> 
> Changes in v3:
> 
> - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
>   kvmppc_ld_inst(). (Alexander Graf)
> 
> Changes in v2:
> 
> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>   exit paths. (Paul Mackerras)
> 
> - moved the byte swapping logic to kvmppc_handle_load() and 
>   kvmppc_handle_store() by changing the is_bigendian parameter
>   meaning. (Paul Mackerras)
> 
> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
> arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++++
> arch/powerpc/kvm/book3s_segment.S       |    9 +++++++++
> arch/powerpc/kvm/emulate.c              |    1 -
> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
> 6 files changed, 43 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 22ec875..ac06434 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> 			      u32 *ptr, bool data)
> {
> -	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> +	int ret;
> +
> +	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> +
> +	if (kvmppc_need_byteswap(vcpu))
> +		*ptr = swab32(*ptr);
> +
> +	return ret;
> }
> 
> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> index b15554a..3769a13 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
> 
> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                              unsigned int rt, unsigned int bytes,
> -                              int is_bigendian);
> +			      unsigned int rt, unsigned int bytes,
> +			      int not_reverse);
> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                               unsigned int rt, unsigned int bytes,
> -                               int is_bigendian);
> +			       unsigned int rt, unsigned int bytes,
> +			       int not_reverse);
> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                               u64 val, unsigned int bytes, int is_bigendian);
> +			       u64 val, unsigned int bytes, int not_reverse);
> 
> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>                                       struct kvm_vcpu *vcpu);
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index 77f1baa..89d4fbe 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -1404,10 +1404,19 @@ fast_interrupt_c_return:
> 	lwz	r8, 0(r10)
> 	mtmsrd	r3
> 
> +	andi.	r0, r11, MSR_LE
> +
> 	/* Store the result */
> 	stw	r8, VCPU_LAST_INST(r9)
> 
> +	beq	after_inst_store
> +
> +	/* Swap and store the result */
> +	addi	r4, r9, VCPU_LAST_INST
> +	stwbrx	r8, 0, r4
> +

On v4 Paul mentioned that it would be dramatically simpler to load last_inst with host endianness and do any required fixups in kvmppc_get_last_inst(), and I tend to agree. That also renders patch 1/6 moot, as you would simply always have a variable with the last_inst in host endianness and swap it regardless.
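One way to keep the fixup entirely in C, as suggested above, is to let the exit path store the raw word untouched and perform the swap only in the accessor. The following user-space sketch illustrates the idea; `swab32()`, `struct vcpu` and `get_last_inst()` here are hypothetical stand-ins for the kernel definitions, not the actual KVM code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the kernel's swab32() */
uint32_t swab32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}

/* Minimal stand-in for struct kvm_vcpu: only what this sketch needs */
struct vcpu {
	uint32_t last_inst;	/* raw word as fetched from guest memory */
	bool guest_is_le;	/* MSR_LE set in the guest MSR */
};

/* Hypothetical fixup-on-read accessor: the guest exit path stores the
 * raw instruction word, and the endian fixup happens only here, in C,
 * so no assembly changes are required. */
uint32_t get_last_inst(const struct vcpu *vcpu)
{
	uint32_t probe = 1;
	bool host_is_le = *(const uint8_t *)&probe == 1;

	if (vcpu->guest_is_le != host_is_le)
		return swab32(vcpu->last_inst);
	return vcpu->last_inst;
}
```

The swap condition depends only on whether guest and host endianness disagree, which is exactly the property kvmppc_need_byteswap() is meant to capture.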

Sorry to make you jump through so many iterations, but getting this right is incredibly hard.

Please rework the patches to not require any asm changes.

> 	/* Unset guest mode. */
> +after_inst_store:
> 	li	r0, KVM_GUEST_MODE_HOST_HV
> 	stb	r0, HSTATE_IN_GUEST(r13)
> 	b	guest_exit_cont
> diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
> index 1abe478..a942390 100644
> --- a/arch/powerpc/kvm/book3s_segment.S
> +++ b/arch/powerpc/kvm/book3s_segment.S
> @@ -289,6 +289,15 @@ ld_last_inst:
> #endif
> 	stw	r0, SVCPU_LAST_INST(r13)
> 
> +#ifdef CONFIG_PPC64
> +	andi.	r9, r4, MSR_LE
> +	beq	no_ld_last_inst
> +
> +	/* swap and store the result */
> +	addi	r9, r13, SVCPU_LAST_INST
> +	stwbrx	r0, 0, r9
> +#endif
> +
> no_ld_last_inst:
> 
> 	/* Unset guest mode */
> diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
> index 751cd45..5e38004 100644
> --- a/arch/powerpc/kvm/emulate.c
> +++ b/arch/powerpc/kvm/emulate.c
> @@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
>  * lmw
>  * stmw
>  *
> - * XXX is_bigendian should depend on MMU mapping or MSR[LE]
>  */
> /* XXX Should probably auto-generate instruction decoding for a particular core
>  * from opcode tables in the future. */
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 07c0106..6950f2b 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
> }
> 
> int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                       unsigned int rt, unsigned int bytes, int is_bigendian)
> +			unsigned int rt, unsigned int bytes, int not_reverse)

I'm not really happy with the "not_reverse" name. In the scope of this patch it's reasonably obvious what it tries to describe, but consider someone looking at this function without a clue where we're swizzling endianness. The name doesn't even mention endianness.

Naming is really hard.

How does "is_default_endian" sound? Then you can change the code below ...

> {
> 	int idx, ret;
> +	int is_bigendian = not_reverse;
> +
> +	if (!kvmppc_is_bigendian(vcpu))
> +		is_bigendian = !not_reverse;

... to

if (kvmppc_is_bigendian(vcpu)) {
    /* Default endianness is "big endian". */
    is_bigendian = is_default_endian;
} else {
    /* Default endianness is "little endian". */
    is_bigendian = !is_default_endian;
}

and suddenly things become reasonably clear for everyone I'd hope.
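The if/else above collapses to a single equality test: the access ends up big-endian exactly when the guest's endianness matches the "default" the caller asked for. A quick user-space check of all four combinations (the function name is hypothetical, introduced only for this sketch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical reduction of the if/else above:
 *
 *   guest BE + default-endian access -> big-endian access
 *   guest BE + reversed access       -> little-endian access
 *   guest LE + default-endian access -> little-endian access
 *   guest LE + reversed access       -> big-endian access
 */
bool access_is_bigendian(bool guest_is_bigendian, bool is_default_endian)
{
	return guest_is_bigendian == is_default_endian;
}
```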


Alex

> 
> 	if (bytes > sizeof(run->mmio.data)) {
> 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
> @@ -662,21 +666,25 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> 
> /* Same as above, but sign extends */
> int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                        unsigned int rt, unsigned int bytes, int is_bigendian)
> +			unsigned int rt, unsigned int bytes, int not_reverse)
> {
> 	int r;
> 
> 	vcpu->arch.mmio_sign_extend = 1;
> -	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
> +	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
> 
> 	return r;
> }
> 
> int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                        u64 val, unsigned int bytes, int is_bigendian)
> +			u64 val, unsigned int bytes, int not_reverse)
> {
> 	void *data = run->mmio.data;
> 	int idx, ret;
> +	int is_bigendian = not_reverse;
> +
> +	if (!kvmppc_is_bigendian(vcpu))
> +		is_bigendian = !not_reverse;
> 
> 	if (bytes > sizeof(run->mmio.data)) {
> 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
> -- 
> 1.7.10.4
> 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v5 4/6] KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2014-01-02 20:25                         ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-02 20:25 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:

> If the host is in little endian order, there is no need to byte-swap
> in little endian guests.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> arch/powerpc/include/asm/kvm_book3s.h |    4 ++++
> 1 file changed, 4 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index ac06434..6974aa0 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -272,7 +272,11 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> 
> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> {
> +#ifdef __LITTLE_ENDIAN__
> +	return !(vcpu->arch.shared->msr & MSR_LE);
> +#else
> 	return vcpu->arch.shared->msr & MSR_LE;
> +#endif
> }
> 
> static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)

... and suddenly is_bigendian() becomes true for little endian guests on little endian hosts?


Alex


* Re: [PATCH v5 6/6] KVM: PPC: Book3S: modify byte loading when guest uses Split Little Endian
  2013-11-05 17:22                       ` Cédric Le Goater
@ 2014-01-02 20:26                         ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-02 20:26 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:

> Instruction and data storage accesses are done in opposite order
> when the Split Little Endian mode is used. This patch modifies
> the kvmppc_ld32() routine to reverse the byteswap when the guest
> is in SLE mode.

SLE can also happen with MMIO. This needs a more global approach I'm afraid.


Alex

> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> arch/powerpc/include/asm/kvm_book3s.h |   14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 6974aa0..eac8808 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -288,10 +288,22 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> 			      u32 *ptr, bool data)
> {
> 	int ret;
> +	bool byteswap;
> 
> 	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> 
> -	if (kvmppc_need_byteswap(vcpu))
> +	byteswap = kvmppc_need_byteswap(vcpu);
> +
> +	/* When in Split Little Endian (SLE) mode, instruction and
> +	 * data storage accesses are done in opposite order. If the
> +	 * guest is using this mode, we need to reverse the byteswap
> +	 * for data accesses only. Instruction accesses are left
> +	 * unchanged.
> +	 */
> +	if (data && (vcpu->arch.shared->msr & MSR_SLE))
> +		byteswap = !byteswap;
> +
> +	if (byteswap)
> 		*ptr = swab32(*ptr);
> 
> 	return ret;
> -- 
> 1.7.10.4
> 


* Re: [PATCH v5.1 4/6] KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host
  2013-11-08 14:36                         ` Cedric Le Goater
@ 2014-01-02 20:28                           ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-02 20:28 UTC (permalink / raw)
  To: Cedric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 08.11.2013, at 15:36, Cedric Le Goater <clg@fr.ibm.com> wrote:

> If the host has the same endian order as the guest, there is no need 
> to byte-swap.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> arch/powerpc/include/asm/kvm_book3s.h |    4 ++++
> 1 file changed, 4 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index ac06434..6974aa0 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -272,7 +272,7 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> 
> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> {
> -	return vcpu->arch.shared->msr & MSR_LE;
> +	return ((vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE));

Ah, I like that one :). However kvmppc_is_bigendian() is still broken now, no?
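The comparison against MSR_KERNEL folds the host's endianness in at build time: in the kernel, MSR_KERNEL is a compile-time constant, so one side of the comparison is fixed, and a swap is needed exactly when guest and host endianness disagree. A user-space sketch of the truth table, with `kernel_msr` as a runtime stand-in for the constant MSR_KERNEL (names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define MSR_LE 1UL	/* PowerPC MSR Little-Endian bit (bit 0) */

/* Sketch of the v5.1 predicate: a byte-swap is required exactly when
 * the guest's MSR_LE bit differs from the host kernel's. */
bool need_byteswap(unsigned long guest_msr, unsigned long kernel_msr)
{
	return (guest_msr & MSR_LE) != (kernel_msr & MSR_LE);
}
```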


Alex

> }
> 
> static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> -- 
> 1.7.10.4
> 
> 


* Re: [PATCH v5 2/6] KVM: PPC: Book3S: add helper routines to detect endian
  2014-01-02 20:05                         ` Alexander Graf
@ 2014-01-08 17:22                           ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2014-01-08 17:22 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

Hi Alex,

On 01/02/2014 09:05 PM, Alexander Graf wrote:
> 
> On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> They will be used to decide whether to byte-swap or not. When Little
>> Endian host kernels come, these routines will need to be changed
>> accordingly.
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> ---
>> arch/powerpc/include/asm/kvm_book3s.h |   10 ++++++++++
>> 1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index d11c089..22ec875 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -270,6 +270,16 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
>> 	return vcpu->arch.pc;
>> }
>>
>> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>> +{
>> +	return vcpu->arch.shared->msr & MSR_LE;
>> +}
>> +
>> +static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
>> +{
>> +	return !kvmppc_need_byteswap(vcpu);
> 
> This is logically reversed. kvmppc_need_byteswap should check kvmppc_is_bigendian(), 
> not the other way around.
> 
 
I think we should get rid of kvmppc_is_bigendian(). 

As you noted in a subsequent email, it ends up returning true when run 
for "little endian guests on little endian hosts", which is awkward and 
the way it is used in kvmppc_handle_load() and kvmppc_handle_store()
can be improved.

I will give it a try taking into account the other comments you made. 

C.



* Re: [PATCH v5 3/6]  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-02 20:22                         ` Alexander Graf
@ 2014-01-08 17:23                           ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2014-01-08 17:23 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 01/02/2014 09:22 PM, Alexander Graf wrote:
> 
> On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> MMIO emulation reads the last instruction executed by the guest 
>> and then emulates. If the guest is running in Little Endian mode, 
>> the instruction needs to be byte-swapped before being emulated.
>>
>> This patch stores the last instruction in the endian order of the
>> host, byte-swapping it if needed. The common code which fetches
>> 'last_inst' uses a helper routine, kvmppc_need_byteswap(), and the
>> exit paths for the Book3S PR and HV guests use their own version
>> in assembly.
>>
>> Finally, the meaning of the 'is_bigendian' argument of the
>> routines kvmppc_handle_load() and kvmppc_handle_store() is
>> slightly changed to indicate a possible byte-reversal. This is
>> used in conjunction with kvmppc_is_bigendian() to determine if
>> the instruction being emulated should be byte-swapped.
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> Signed-off-by: Paul Mackerras <paulus@samba.org>
>> ---
>>
>> Changes in v5:
>>
>> - changed register usage slightly (paulus@samba.org)
>> - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
>>
>> Changes in v3:
>>
>> - moved the kvmppc_need_byteswap() check into kvmppc_ld32(). It
>>   was previously in kvmppc_ld_inst(). (Alexander Graf)
>>
>> Changes in v2:
>>
>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>   exit paths. (Paul Mackerras)
>>
>> - moved the byte swapping logic to kvmppc_handle_load() and 
>>   kvmppc_handle_store() by changing the is_bigendian parameter
>>   meaning. (Paul Mackerras)
>>
>> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
>> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++++
>> arch/powerpc/kvm/book3s_segment.S       |    9 +++++++++
>> arch/powerpc/kvm/emulate.c              |    1 -
>> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
>> 6 files changed, 43 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index 22ec875..ac06434 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
>> static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>> 			      u32 *ptr, bool data)
>> {
>> -	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>> +	int ret;
>> +
>> +	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>> +
>> +	if (kvmppc_need_byteswap(vcpu))
>> +		*ptr = swab32(*ptr);
>> +
>> +	return ret;
>> }
>>
>> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
>> index b15554a..3769a13 100644
>> --- a/arch/powerpc/include/asm/kvm_ppc.h
>> +++ b/arch/powerpc/include/asm/kvm_ppc.h
>> @@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
>>
>> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
>> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                              unsigned int rt, unsigned int bytes,
>> -                              int is_bigendian);
>> +			      unsigned int rt, unsigned int bytes,
>> +			      int not_reverse);
>> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                               unsigned int rt, unsigned int bytes,
>> -                               int is_bigendian);
>> +			       unsigned int rt, unsigned int bytes,
>> +			       int not_reverse);
>> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                               u64 val, unsigned int bytes, int is_bigendian);
>> +			       u64 val, unsigned int bytes, int not_reverse);
>>
>> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>>                                       struct kvm_vcpu *vcpu);
>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> index 77f1baa..89d4fbe 100644
>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> @@ -1404,10 +1404,19 @@ fast_interrupt_c_return:
>> 	lwz	r8, 0(r10)
>> 	mtmsrd	r3
>>
>> +	andi.	r0, r11, MSR_LE
>> +
>> 	/* Store the result */
>> 	stw	r8, VCPU_LAST_INST(r9)
>>
>> +	beq	after_inst_store
>> +
>> +	/* Swap and store the result */
>> +	addi	r4, r9, VCPU_LAST_INST
>> +	stwbrx	r8, 0, r4
>> +
> 
> On v4 Paul mentioned that it would be dramatically more simple to load 
> last_inst with host endianness and do any required fixups in 
> kvmppc_get_last_inst() and I tend to agree. 

Hmm, I am confused... this is what the above code is doing: loading the 
guest's last_inst with host endianness. Anyhow, I think I get what you mean.

> That also renders patch 1/6 moot, as you would simply always have a 
> variable with the last_inst in host endianness and swap it regardless.
>
> Sorry to make you jump through so many iterations, but getting this 
> right is incredibly hard.

It's ok. We are exploring alternatives. I'd rather talk about it and get 
this done.

> Please rework the patches to not require any asm changes.

OK. I will send a patch without the SLE support, whose consequences I don't 
think I fully understand.

>> 	/* Unset guest mode. */
>> +after_inst_store:
>> 	li	r0, KVM_GUEST_MODE_HOST_HV
>> 	stb	r0, HSTATE_IN_GUEST(r13)
>> 	b	guest_exit_cont
>> diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
>> index 1abe478..a942390 100644
>> --- a/arch/powerpc/kvm/book3s_segment.S
>> +++ b/arch/powerpc/kvm/book3s_segment.S
>> @@ -289,6 +289,15 @@ ld_last_inst:
>> #endif
>> 	stw	r0, SVCPU_LAST_INST(r13)
>>
>> +#ifdef CONFIG_PPC64
>> +	andi.	r9, r4, MSR_LE
>> +	beq	no_ld_last_inst
>> +
>> +	/* swap and store the result */
>> +	addi	r9, r13, SVCPU_LAST_INST
>> +	stwbrx	r0, 0, r9
>> +#endif
>> +
>> no_ld_last_inst:
>>
>> 	/* Unset guest mode */
>> diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
>> index 751cd45..5e38004 100644
>> --- a/arch/powerpc/kvm/emulate.c
>> +++ b/arch/powerpc/kvm/emulate.c
>> @@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
>>  * lmw
>>  * stmw
>>  *
>> - * XXX is_bigendian should depend on MMU mapping or MSR[LE]
>>  */
>> /* XXX Should probably auto-generate instruction decoding for a particular core
>>  * from opcode tables in the future. */
>> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
>> index 07c0106..6950f2b 100644
>> --- a/arch/powerpc/kvm/powerpc.c
>> +++ b/arch/powerpc/kvm/powerpc.c
>> @@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
>> }
>>
>> int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                       unsigned int rt, unsigned int bytes, int is_bigendian)
>> +			unsigned int rt, unsigned int bytes, int not_reverse)
> 
> I'm not really happy with the "not_reverse" name. In the scope of this patch it's 
> reasonably obvious what it tries to describe, but consider someone looking at this 
> function without a clue where we're swizzling endianness. The name doesn't even mention 
> endianness.
> 
> Naming is really hard.

Yes, we should keep 'is_bigendian'.

> How does "is_default_endian" sound? Then you can change the code below ...
>
>> {
>> 	int idx, ret;
>> +	int is_bigendian = not_reverse;
>> +
>> +	if (!kvmppc_is_bigendian(vcpu))
>> +		is_bigendian = !not_reverse;
> 
> ... to
> 
> if (kvmppc_is_bigendian(vcpu)) {
>     /* Default endianness is "big endian". */
>     is_bigendian = is_default_endian;
> } else {
>     /* Default endianness is "little endian". */
>     is_bigendian = !is_default_endian;
> }
> 
> and suddenly things become reasonably clear for everyone I'd hope.

I think something like:

+	if (kvmppc_need_byteswap(vcpu))
+		is_bigendian = !is_bigendian;
+

has a small footprint and is clear enough?

Thanks for the input; a (single) patch follows.

C.

> Alex
> 
>>
>> 	if (bytes > sizeof(run->mmio.data)) {
>> 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
>> @@ -662,21 +666,25 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>
>> /* Same as above, but sign extends */
>> int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                        unsigned int rt, unsigned int bytes, int is_bigendian)
>> +			unsigned int rt, unsigned int bytes, int not_reverse)
>> {
>> 	int r;
>>
>> 	vcpu->arch.mmio_sign_extend = 1;
>> -	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
>> +	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
>>
>> 	return r;
>> }
>>
>> int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                        u64 val, unsigned int bytes, int is_bigendian)
>> +			u64 val, unsigned int bytes, int not_reverse)
>> {
>> 	void *data = run->mmio.data;
>> 	int idx, ret;
>> +	int is_bigendian = not_reverse;
>> +
>> +	if (!kvmppc_is_bigendian(vcpu))
>> +		is_bigendian = !not_reverse;
>>
>> 	if (bytes > sizeof(run->mmio.data)) {
>> 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
>> -- 
>> 1.7.10.4
>>
> 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [PATCH v5 3/6]  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-08 17:23                           ` Cedric Le Goater
@ 2014-01-08 17:34                             ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-08 17:34 UTC (permalink / raw)
  To: Cedric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 01/08/2014 06:23 PM, Cedric Le Goater wrote:
> On 01/02/2014 09:22 PM, Alexander Graf wrote:
>> On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:
>>
>>> MMIO emulation reads the last instruction executed by the guest
>>> and then emulates. If the guest is running in Little Endian mode,
>>> the instruction needs to be byte-swapped before being emulated.
>>>
>>> This patch stores the last instruction in the endian order of the
>>> host, primarily doing a byte-swap if needed. The common code
>>> which fetches 'last_inst' uses a helper routine kvmppc_need_byteswap().
>>> and the exit paths for the Book3S PV and HR guests use their own
>>> version in assembly.
>>>
>>> Finally, the meaning of the 'is_bigendian' argument of the
>>> routines kvmppc_handle_load() of kvmppc_handle_store() is
>>> slightly changed to represent an eventual reverse operation. This
>>> is used in conjunction with kvmppc_is_bigendian() to determine if
>>> the instruction being emulated should be byte-swapped.
>>>
>>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>>> Signed-off-by: Paul Mackerras <paulus@samba.org>
>>> ---
>>>
>>> Changes in v5:
>>>
>>> - changed register usage slightly (paulus@samba.org)
>>> - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
>>>
>>> Changes in v3:
>>>
>>> - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
>>>    kvmppc_ld_inst(). (Alexander Graf)
>>>
>>> Changes in v2:
>>>
>>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>>    exit paths. (Paul Mackerras)
>>>
>>> - moved the byte swapping logic to kvmppc_handle_load() and
>>>    kvmppc_handle_load() by changing the is_bigendian parameter
>>>    meaning. (Paul Mackerras)
>>>
>>> arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
>>> arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
>>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++++
>>> arch/powerpc/kvm/book3s_segment.S       |    9 +++++++++
>>> arch/powerpc/kvm/emulate.c              |    1 -
>>> arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
>>> 6 files changed, 43 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>>> index 22ec875..ac06434 100644
>>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>>> @@ -283,7 +283,14 @@ static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
>>> static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>>> 			      u32 *ptr, bool data)
>>> {
>>> -	return kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>>> +	int ret;
>>> +
>>> +	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>>> +
>>> +	if (kvmppc_need_byteswap(vcpu))
>>> +		*ptr = swab32(*ptr);
>>> +
>>> +	return ret;
>>> }
>>>
>>> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>>> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
>>> index b15554a..3769a13 100644
>>> --- a/arch/powerpc/include/asm/kvm_ppc.h
>>> +++ b/arch/powerpc/include/asm/kvm_ppc.h
>>> @@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
>>>
>>> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
>>> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>> -                              unsigned int rt, unsigned int bytes,
>>> -                              int is_bigendian);
>>> +			      unsigned int rt, unsigned int bytes,
>>> +			      int not_reverse);
>>> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>> -                               unsigned int rt, unsigned int bytes,
>>> -                               int is_bigendian);
>>> +			       unsigned int rt, unsigned int bytes,
>>> +			       int not_reverse);
>>> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>> -                               u64 val, unsigned int bytes, int is_bigendian);
>>> +			       u64 val, unsigned int bytes, int not_reverse);
>>>
>>> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>>>                                        struct kvm_vcpu *vcpu);
>>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>>> index 77f1baa..89d4fbe 100644
>>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>>> @@ -1404,10 +1404,19 @@ fast_interrupt_c_return:
>>> 	lwz	r8, 0(r10)
>>> 	mtmsrd	r3
>>>
>>> +	andi.	r0, r11, MSR_LE
>>> +
>>> 	/* Store the result */
>>> 	stw	r8, VCPU_LAST_INST(r9)
>>>
>>> +	beq	after_inst_store
>>> +
>>> +	/* Swap and store the result */
>>> +	addi	r4, r9, VCPU_LAST_INST
>>> +	stwbrx	r8, 0, r4
>>> +
>> On v4 Paul mentioned that it would be dramatically more simple to load
>> last_inst with host endianness and do any required fixups in
>> kvmppc_get_last_inst() and I tend to agree.
> Hmm, I am confused ... This is what the above code is doing : loading the
> guest last_inst with host endianness. Anyhow, I think I get what you mean.
>
>> That also renders patch 1/6 moot, as you would simply always have a
>> variable with the last_inst in host endianness and swap it regardless.
>>
>> Sorry to make you jump through so many iterations, but getting this
>> right is incredibly hard.
> It's ok. We are exploring alternatives. I rather talk about it and get
> this done.
>
>> Please rework the patches to not require any asm changes.
> ok. I will send a patch without the SLE support for which I think I don't
> fully understand the consequences.
>
>>> 	/* Unset guest mode. */
>>> +after_inst_store:
>>> 	li	r0, KVM_GUEST_MODE_HOST_HV
>>> 	stb	r0, HSTATE_IN_GUEST(r13)
>>> 	b	guest_exit_cont
>>> diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
>>> index 1abe478..a942390 100644
>>> --- a/arch/powerpc/kvm/book3s_segment.S
>>> +++ b/arch/powerpc/kvm/book3s_segment.S
>>> @@ -289,6 +289,15 @@ ld_last_inst:
>>> #endif
>>> 	stw	r0, SVCPU_LAST_INST(r13)
>>>
>>> +#ifdef CONFIG_PPC64
>>> +	andi.	r9, r4, MSR_LE
>>> +	beq	no_ld_last_inst
>>> +
>>> +	/* swap and store the result */
>>> +	addi	r9, r13, SVCPU_LAST_INST
>>> +	stwbrx	r0, 0, r9
>>> +#endif
>>> +
>>> no_ld_last_inst:
>>>
>>> 	/* Unset guest mode */
>>> diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
>>> index 751cd45..5e38004 100644
>>> --- a/arch/powerpc/kvm/emulate.c
>>> +++ b/arch/powerpc/kvm/emulate.c
>>> @@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
>>>   * lmw
>>>   * stmw
>>>   *
>>> - * XXX is_bigendian should depend on MMU mapping or MSR[LE]
>>>   */
>>> /* XXX Should probably auto-generate instruction decoding for a particular core
>>>   * from opcode tables in the future. */
>>> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
>>> index 07c0106..6950f2b 100644
>>> --- a/arch/powerpc/kvm/powerpc.c
>>> +++ b/arch/powerpc/kvm/powerpc.c
>>> @@ -625,9 +625,13 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
>>> }
>>>
>>> int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>> -                       unsigned int rt, unsigned int bytes, int is_bigendian)
>>> +			unsigned int rt, unsigned int bytes, int not_reverse)
>> I'm not really happy with the "not_reverse" name. In the scope of this patch it's
>> reasonably obvious what it tries to describe, but consider someone looking at this
>> function without a clue where we're swizzling endianness. The name doesn't even mention
>> endianness.
>>
>> Naming is really hard.
> yes. we should leave 'is_bigendian'.
>
>> How does "is_default_endian" sound? Then you can change the code below ...
>>
>>> {
>>> 	int idx, ret;
>>> +	int is_bigendian = not_reverse;
>>> +
>>> +	if (!kvmppc_is_bigendian(vcpu))
>>> +		is_bigendian = !not_reverse;
>> ... to
>>
>> if (kvmppc_is_bigendian(vcpu)) {
>>      /* Default endianness is "big endian". */
>>      is_bigendian = is_default_endian;
>> } else {
>>      /* Default endianness is "little endian". */
>>      is_bigendian = !is_default_endian;
>> }
>>
>> and suddenly things become reasonably clear for everyone I'd hope.
> I think something like :
>
> +	if (kvmppc_need_byteswap(vcpu))
> +		is_bigendian = !is_bigendian;
> +
>
> has a small footprint and is clear enough ?
>
> Thanks for the inputs, a (single) patch follows

Not really. The argument means "use the normal endianness you would 
usually use for memory access". It doesn't mean little or big endian 
yet, as that's what we determine later.

Keep in mind that gcc is really good at optimizing code like this, so 
please don't try to be clever with variable reuse or the like. In assembly 
this will all look identical, but the C representation should be as 
self-documenting as possible.


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread

>>> +		is_bigendian = !not_reverse;
>> ... to
>>
>> if (kvmppc_is_bigendian(vcpu)) {
>>      /* Default endianness is "big endian". */
>>      is_bigendian = is_default_endian;
>> } else {
>>      /* Default endianness is "little endian". */
>>      is_bigendian = !is_default_endian;
>> }
>>
>> and suddenly things become reasonably clear for everyone I'd hope.
> I think something like :
>
> +	if (kvmppc_need_byteswap(vcpu))
> +		is_bigendian = !is_bigendian;
> +
>
> has a small footprint and is clear enough ?
>
> Thanks for the inputs, a (single) patch follows

Not really. The argument means "use the normal endianness you would 
usually use for memory access". It doesn't mean little or big endian 
yet, as that's what we determine later.

Keep in mind that gcc is really good at optimizing code like this, so 
please don't try to be smart with variable reusage or any of the likes. 
In assembly this will all look identical, but the C representation 
should be as self-documenting as possible.


Alex


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [PATCH v6] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-08 17:23                           ` Cedric Le Goater
@ 2014-01-08 17:35                             ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2014-01-08 17:35 UTC (permalink / raw)
  To: agraf; +Cc: paulus, kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian
order, or more generally in a different endian order than the
host, the instruction needs to be byte-swapped before being
emulated.

This patch adds a helper routine which tests the endian order of
the host and the guest in order to decide whether a byteswap is
needed or not. It is then used to byteswap the last instruction
of the guest into the endian order of the host before MMIO
emulation is performed.

Finally, kvmppc_handle_load() and kvmppc_handle_store() are
modified to reverse the endianness of the MMIO if required.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

This patch was tested for Big Endian and Little Endian HV guests 
and Big Endian PR guests on 3.13 (plus a h_set_mode hack)

Changes in v6:

 - removed asm changes (Alexander Graf)
 - byteswaps last_inst when used in kvmppc_get_last_inst()
 - postponed Split Little Endian support

Changes in v5:

 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() into kvmppc_ld32(). It previously was
   in kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)
	

 arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++++++++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/emulate.c            |    1 -
 arch/powerpc/kvm/powerpc.c            |    6 ++++++
 4 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index bc23b1ba7980..00499f5f16bc 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
+}
+
+static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
+{
+	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
+		vcpu->arch.last_inst;
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 /*
@@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 79e992d8c823..ff10fba29878 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * we just return and retry the instruction.
 	 */
 
-	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
+	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)
 		return RESUME_GUEST;
 
 	/*
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2f9a0873b44f..c2b887be2c29 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9ae97686e9f4..b2adea28f2f0 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -677,6 +677,9 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 {
 	int idx, ret;
 
+	if (kvmppc_need_byteswap(vcpu))
+		is_bigendian = !is_bigendian;
+
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
 		       run->mmio.len);
@@ -727,6 +730,9 @@ int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	void *data = run->mmio.data;
 	int idx, ret;
 
+	if (kvmppc_need_byteswap(vcpu))
+		is_bigendian = !is_bigendian;
+
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
 		       run->mmio.len);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 94+ messages in thread

* Re: [PATCH v5 3/6]  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-08 17:34                             ` Alexander Graf
@ 2014-01-08 17:40                               ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2014-01-08 17:40 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 01/08/2014 06:34 PM, Alexander Graf wrote:
>>> if (kvmppc_is_bigendian(vcpu)) {
>>>      /* Default endianness is "big endian". */
>>>      is_bigendian = is_default_endian;
>>> } else {
>>>      /* Default endianness is "little endian". */
>>>      is_bigendian = !is_default_endian;
>>> }
>>>
>>> and suddenly things become reasonably clear for everyone I'd hope.
>> I think something like :
>>
>> +    if (kvmppc_need_byteswap(vcpu))
>> +        is_bigendian = !is_bigendian;
>> +
>>
>> has a small footprint and is clear enough ?
>>
>> Thanks for the inputs, a (single) patch follows
> 
> Not really. The argument means "use the normal endianness you would usually use for memory access". It doesn't mean little or big endian yet, as that's what we determine later.
> 
> Keep in mind that gcc is really good at optimizing code like this, so please don't try to be smart with variable reusage or any of the likes. In assembly this will all look identical, but the C representation should be as self-documenting as possible.

Arg. I should have waited a few minutes. No problem. I will resend 
with your "is_default_endian" proposal. 

Cheers,

C.




^ permalink raw reply	[flat|nested] 94+ messages in thread

* [PATCH v7] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-08 17:23                           ` Cedric Le Goater
@ 2014-01-09 10:02                             ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2014-01-09 10:02 UTC (permalink / raw)
  To: agraf; +Cc: paulus, kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian
order, or more generally in a different endian order than the
host, the instruction needs to be byte-swapped before being
emulated.

This patch adds a helper routine which tests the endian order of
the host and the guest in order to decide whether a byteswap is
needed or not. It is then used to byteswap the last instruction
of the guest into the endian order of the host before MMIO
emulation is performed.

Finally, kvmppc_handle_load() and kvmppc_handle_store() are
modified to reverse the endianness of the MMIO if required.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

How's that? As the changes were small, I kept them in one patch
but I can split if necessary.

This patch was tested for Big Endian and Little Endian HV guests 
and Big Endian PR guests on 3.13 (plus a h_set_mode hack)

Cheers,

C.

Changes in v7:

 - replaced is_bigendian by is_default_endian (Alexander Graf)

Changes in v6:

 - removed asm changes (Alexander Graf)
 - byteswap last_inst when used in kvmppc_get_last_inst()
 - postponed Split Little Endian support

Changes in v5:

 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() into kvmppc_ld32(). It previously was
   in kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)
	
 arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/emulate.c            |    1 -
 arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
 5 files changed, 42 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index bc23b1ba7980..00499f5f16bc 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
+}
+
+static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
+{
+	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
+		vcpu->arch.last_inst;
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 /*
@@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c8317fbf92c4..629277df4798 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
                               unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      int is_default_endian);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes,
+			       int is_default_endian);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 79e992d8c823..ff10fba29878 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * we just return and retry the instruction.
 	 */
 
-	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
+	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)
 		return RESUME_GUEST;
 
 	/*
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2f9a0873b44f..c2b887be2c29 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9ae97686e9f4..053c92fb55d9 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -673,9 +673,19 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+		       unsigned int rt, unsigned int bytes,
+		       int is_default_endian)
 {
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -711,21 +721,31 @@ EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes,
+			int is_default_endian)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int is_default_endian)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH v7] KVM: PPC: Book3S: MMIO emulation support for little endian guests
@ 2014-01-09 10:02                             ` Cédric Le Goater
  0 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2014-01-09 10:02 UTC (permalink / raw)
  To: agraf; +Cc: paulus, kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian order,
or more generally in a different endian order of the host, the
instruction needs to be byte-swapped before being emulated.

This patch adds a helper routine which tests the endian order of
the host and the guest in order to decide whether a byteswap is
needed or not. It is then used to byteswap the last instruction
of the guest in the endian order of the host before MMIO emulation
is performed.

Finally, kvmppc_handle_load() of kvmppc_handle_store() are modified
to reverse the endianness of the MMIO if required.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

How's that ? As the changes were small, I kept them in one patch 
but I can split if necessary. 

This patch was tested for Big Endian and Little Endian HV guests 
and Big Endian PR guests on 3.13 (plus a h_set_mode hack)

Cheers,

C.

Changes in v7:

 - replaced is_bigendian by is_default_endian (Alexander Graf)

Changes in v6:

 - removed asm changes (Alexander Graf)
 - byteswap last_inst when used in kvmppc_get_last_inst()
 - postponed Split Little Endian support

Changes in v5:

 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_load() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)
	
 arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/emulate.c            |    1 -
 arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
 5 files changed, 42 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index bc23b1ba7980..00499f5f16bc 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
+}
+
+static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
+{
+	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
+		vcpu->arch.last_inst;
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst = KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 /*
@@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst = KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c8317fbf92c4..629277df4798 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
                               unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      int is_default_endian);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes,
+			       int is_default_endian);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 79e992d8c823..ff10fba29878 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * we just return and retry the instruction.
 	 */
 
-	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
+	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)
 		return RESUME_GUEST;
 
 	/*
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2f9a0873b44f..c2b887be2c29 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9ae97686e9f4..053c92fb55d9 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -673,9 +673,19 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+		       unsigned int rt, unsigned int bytes,
+		       int is_default_endian)
 {
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -711,21 +721,31 @@ EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes,
+			int is_default_endian)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int is_default_endian)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 94+ messages in thread
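The is_default_endian handling in the hunk above reduces to a small truth table: when the guest and host endianness differ, the meaning of "default" flips. A minimal, hedged sketch of that mapping (mmio_is_bigendian is a stand-in name, not a kernel symbol; the need_byteswap flag stands in for the real kvmppc_need_byteswap(vcpu) check):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the is_default_endian -> is_bigendian mapping from the
 * patch above.  need_byteswap mocks kvmppc_need_byteswap(vcpu), which
 * is true when guest and host run in different endian orders.
 */
static int mmio_is_bigendian(bool need_byteswap, int is_default_endian)
{
	if (need_byteswap)
		/* Default endianness is "little endian". */
		return !is_default_endian;
	/* Default endianness is "big endian". */
	return is_default_endian;
}
```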

* Re: [PATCH v7] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-09 10:02                             ` Cédric Le Goater
@ 2014-01-09 10:17                               ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-09 10:17 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 09.01.2014, at 11:02, Cédric Le Goater <clg@fr.ibm.com> wrote:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates. If the guest is running in Little Endian order,
> or more generally in a different endian order of the host, the
> instruction needs to be byte-swapped before being emulated.
> 
> This patch adds a helper routine which tests the endian order of
> the host and the guest in order to decide whether a byteswap is
> needed or not. It is then used to byteswap the last instruction
> of the guest in the endian order of the host before MMIO emulation
> is performed.
> 
> Finally, kvmppc_handle_load() of kvmppc_handle_store() are modified
> to reverse the endianness of the MMIO if required.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> 
> How's that ? As the changes were small, I kept them in one patch 
> but I can split if necessary. 
> 
> This patch was tested for Big Endian and Little Endian HV guests 
> and Big Endian PR guests on 3.13 (plus a h_set_mode hack)
> 
> Cheers,
> 
> C.
> 
> Changes in v7:
> 
> - replaced is_bigendian by is_default_endian (Alexander Graf)
> 
> Changes in v6:
> 
> - removed asm changes (Alexander Graf)
> - byteswap last_inst when used in kvmppc_get_last_inst()
> - postponed Split Little Endian support
> 
> Changes in v5:
> 
> - changed register usage slightly (paulus@samba.org)
> - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
> - added support for little endian host
> - added support for Split Little Endian (SLE)
> 
> Changes in v4:
> 
> - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)
> 
> Changes in v3:
> 
> - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
>   kvmppc_ld_inst(). (Alexander Graf)
> 
> Changes in v2:
> 
> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>   exit paths. (Paul Mackerras)
> 
> - moved the byte swapping logic to kvmppc_handle_load() and 
>   kvmppc_handle_load() by changing the is_bigendian parameter
>   meaning. (Paul Mackerras)
> 	
> arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++++++++--
> arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
> arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
> arch/powerpc/kvm/emulate.c            |    1 -
> arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
> 5 files changed, 42 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index bc23b1ba7980..00499f5f16bc 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> 	return vcpu->arch.pc;
> }
> 
> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> +{
> +	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
> +}
> +
> +static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
> +{
> +	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
> +		vcpu->arch.last_inst;
> +}
> +
> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> {
> 	ulong pc = kvmppc_get_pc(vcpu);
> @@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> 
> -	return vcpu->arch.last_inst;

I would prefer if you just explicitly put the contents of kvmppc_byteswap_last_inst() here.

> +	return kvmppc_byteswap_last_inst(vcpu);
> }
> 
> /*
> @@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)

... and instead converge the two functions into one. In fact, let me quickly hack up a patch for that.

> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> 
> -	return vcpu->arch.last_inst;
> +	return kvmppc_byteswap_last_inst(vcpu);
> }
> 
> static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> index c8317fbf92c4..629277df4798 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>                               unsigned int rt, unsigned int bytes,
> -                              int is_bigendian);
> +			      int is_default_endian);
> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>                                unsigned int rt, unsigned int bytes,
> -                               int is_bigendian);
> +			       int is_default_endian);
> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                               u64 val, unsigned int bytes, int is_bigendian);
> +			       u64 val, unsigned int bytes,
> +			       int is_default_endian);
> 
> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>                                       struct kvm_vcpu *vcpu);
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 79e992d8c823..ff10fba29878 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
> 	 * we just return and retry the instruction.
> 	 */
> 
> -	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
> +	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)

This can safely be kvmppc_get_last_inst() at this point, because we're definitely not hitting the KVM_INST_FETCH_FAILED code path anymore.


Alex

^ permalink raw reply	[flat|nested] 94+ messages in thread
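The endianness test and instruction byteswap discussed in the review above can be sketched in a self-contained form. MSR_LE, need_byteswap and fixup_last_inst below are simplified stand-ins for the kernel's MSR bit and the kvmppc_need_byteswap()/kvmppc_get_last_inst() helpers, not the actual symbols:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mocked MSR Little Endian bit; the real value lives in asm/reg.h. */
#define MSR_LE 0x1ULL

/* Open-coded 32-bit byteswap, equivalent to the kernel's swab32(). */
static uint32_t swab32_sketch(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}

/* A byteswap is needed when guest and host MSR_LE bits differ. */
static bool need_byteswap(uint64_t guest_msr, uint64_t host_msr)
{
	return (guest_msr & MSR_LE) != (host_msr & MSR_LE);
}

/* Return the last instruction in the host's endian order. */
static uint32_t fixup_last_inst(uint32_t last_inst,
				uint64_t guest_msr, uint64_t host_msr)
{
	return need_byteswap(guest_msr, host_msr) ?
		swab32_sketch(last_inst) : last_inst;
}
```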


* Re: [PATCH v7] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-09 10:17                               ` Alexander Graf
@ 2014-01-09 10:33                                 ` Cedric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cedric Le Goater @ 2014-01-09 10:33 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list

On 01/09/2014 11:17 AM, Alexander Graf wrote:
> 
> On 09.01.2014, at 11:02, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> MMIO emulation reads the last instruction executed by the guest
>> and then emulates. If the guest is running in Little Endian order,
>> or more generally in a different endian order of the host, the
>> instruction needs to be byte-swapped before being emulated.
>>
>> This patch adds a helper routine which tests the endian order of
>> the host and the guest in order to decide whether a byteswap is
>> needed or not. It is then used to byteswap the last instruction
>> of the guest in the endian order of the host before MMIO emulation
>> is performed.
>>
>> Finally, kvmppc_handle_load() of kvmppc_handle_store() are modified
>> to reverse the endianness of the MMIO if required.
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> ---
>>
>> How's that ? As the changes were small, I kept them in one patch 
>> but I can split if necessary. 
>>
>> This patch was tested for Big Endian and Little Endian HV guests 
>> and Big Endian PR guests on 3.13 (plus a h_set_mode hack)
>>
>> Cheers,
>>
>> C.
>>
>> Changes in v7:
>>
>> - replaced is_bigendian by is_default_endian (Alexander Graf)
>>
>> Changes in v6:
>>
>> - removed asm changes (Alexander Graf)
>> - byteswap last_inst when used in kvmppc_get_last_inst()
>> - postponed Split Little Endian support
>>
>> Changes in v5:
>>
>> - changed register usage slightly (paulus@samba.org)
>> - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
>> - added support for little endian host
>> - added support for Split Little Endian (SLE)
>>
>> Changes in v4:
>>
>> - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)
>>
>> Changes in v3:
>>
>> - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
>>   kvmppc_ld_inst(). (Alexander Graf)
>>
>> Changes in v2:
>>
>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>   exit paths. (Paul Mackerras)
>>
>> - moved the byte swapping logic to kvmppc_handle_load() and 
>>   kvmppc_handle_load() by changing the is_bigendian parameter
>>   meaning. (Paul Mackerras)
>> 	
>> arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++++++++--
>> arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
>> arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
>> arch/powerpc/kvm/emulate.c            |    1 -
>> arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
>> 5 files changed, 42 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index bc23b1ba7980..00499f5f16bc 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
>> 	return vcpu->arch.pc;
>> }
>>
>> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>> +{
>> +	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
>> +}
>> +
>> +static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
>> +{
>> +	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
>> +		vcpu->arch.last_inst;
>> +}
>> +
>> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>> {
>> 	ulong pc = kvmppc_get_pc(vcpu);
>> @@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>>
>> -	return vcpu->arch.last_inst;
> 
> I would prefer if you just explicitly put the contents of kvmppc_byteswap_last_inst() here.
> 
>> +	return kvmppc_byteswap_last_inst(vcpu);
>> }
>>
>> /*
>> @@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
> 
> ... and instead converge the two functions into one. In fact, let me quickly hack up a patch for that.

OK. I am taking the patch you just sent and work on a v8. 

>> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>>
>> -	return vcpu->arch.last_inst;
>> +	return kvmppc_byteswap_last_inst(vcpu);
>> }
>>
>> static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
>> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
>> index c8317fbf92c4..629277df4798 100644
>> --- a/arch/powerpc/include/asm/kvm_ppc.h
>> +++ b/arch/powerpc/include/asm/kvm_ppc.h
>> @@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
>> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
>> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>                               unsigned int rt, unsigned int bytes,
>> -                              int is_bigendian);
>> +			      int is_default_endian);
>> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>                                unsigned int rt, unsigned int bytes,
>> -                               int is_bigendian);
>> +			       int is_default_endian);
>> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                               u64 val, unsigned int bytes, int is_bigendian);
>> +			       u64 val, unsigned int bytes,
>> +			       int is_default_endian);
>>
>> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>>                                       struct kvm_vcpu *vcpu);
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 79e992d8c823..ff10fba29878 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> 	 * we just return and retry the instruction.
>> 	 */
>>
>> -	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
>> +	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)
> 
> This can safely be kvmppc_get_last_inst() at this point, because we're definitely not hitting the KVM_INST_FETCH_FAILED code path anymore.

OK.

Thanks,

C. 


^ permalink raw reply	[flat|nested] 94+ messages in thread
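What "reverse the endianness of the MMIO if required" means for a store can be illustrated with a hedged, self-contained sketch (this is not the kernel's exact code; mmio_store is a hypothetical name and the buffer stands in for run->mmio.data):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Lay out val in an MMIO data buffer in the endianness chosen by the
 * is_bigendian decision: most significant byte first for big endian,
 * least significant byte first for little endian.
 */
static void mmio_store(uint8_t *data, uint64_t val, unsigned int bytes,
		       bool is_bigendian)
{
	for (unsigned int i = 0; i < bytes; i++) {
		unsigned int shift = is_bigendian ?
			8 * (bytes - 1 - i) :	/* MSB first */
			8 * i;			/* LSB first */
		data[i] = (val >> shift) & 0xff;
	}
}
```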

* [PATCH v8] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-09 10:33                                 ` Cedric Le Goater
@ 2014-01-09 10:51                                   ` Cédric Le Goater
  -1 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2014-01-09 10:51 UTC (permalink / raw)
  To: agraf; +Cc: paulus, kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian order,
or more generally in a different endian order than the host, the
instruction needs to be byte-swapped before being emulated.

This patch adds a helper routine which tests the endian order of
the host and the guest in order to decide whether a byteswap is
needed or not. It is then used to byteswap the last instruction
of the guest in the endian order of the host before MMIO emulation
is performed.

Finally, kvmppc_handle_load() and kvmppc_handle_store() are modified
to reverse the endianness of the MMIO if required.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
How's that ? As the changes were small, I kept them in one patch 
but I can split if necessary. 

This patch was tested for Big Endian and Little Endian HV guests 
and Big Endian PR guests on 3.13 plus these patches :

 - http://patchwork.ozlabs.org/patch/308545/
 - h_set_mode internal patch to support LE guests


Cheers,

C.

Changes in v8:

 - merged kvmppc_byteswap_last_inst() in kvmppc_get_last_inst()	
   (Alexander Graf)
 - depends on http://patchwork.ozlabs.org/patch/308545/

Changes in v7:

 - replaced is_bigendian by is_default_endian (Alexander Graf)

Changes in v6:

 - removed asm changes (Alexander Graf)
 - byteswap last_inst when used in kvmppc_get_last_inst()
 - postponed Split Little Endian support

Changes in v5:

 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h |    8 +++++++-
 arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/emulate.c            |    1 -
 arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
 5 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index db5571f900cd..08d1263c8620 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -271,6 +271,11 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
+}
+
 static inline u32 kvmppc_get_last_inst_internal(struct kvm_vcpu *vcpu, ulong pc)
 {
 	/* Load the instruction manually if it failed to do so in the
@@ -278,7 +283,8 @@ static inline u32 kvmppc_get_last_inst_internal(struct kvm_vcpu *vcpu, ulong pc)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
+		vcpu->arch.last_inst;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c8317fbf92c4..629277df4798 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
                               unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      int is_default_endian);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes,
+			       int is_default_endian);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 79e992d8c823..303ece75b8e4 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * we just return and retry the instruction.
 	 */
 
-	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
+	if (instruction_is_store(kvmppc_get_last_inst(vcpu)) != !!is_store)
 		return RESUME_GUEST;
 
 	/*
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2f9a0873b44f..c2b887be2c29 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9ae97686e9f4..053c92fb55d9 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -673,9 +673,19 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+		       unsigned int rt, unsigned int bytes,
+		       int is_default_endian)
 {
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -711,21 +721,31 @@ EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes,
+			int is_default_endian)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int is_default_endian)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4


* [PATCH v8] KVM: PPC: Book3S: MMIO emulation support for little endian guests
@ 2014-01-09 10:51                                   ` Cédric Le Goater
  0 siblings, 0 replies; 94+ messages in thread
From: Cédric Le Goater @ 2014-01-09 10:51 UTC (permalink / raw)
  To: agraf; +Cc: paulus, kvm-ppc, kvm, Cédric Le Goater

MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian order,
or more generally in a different endian order from the host, the
instruction needs to be byte-swapped before being emulated.

This patch adds a helper routine which tests the endian order of
the host and the guest in order to decide whether a byteswap is
needed or not. It is then used to byteswap the last instruction
of the guest in the endian order of the host before MMIO emulation
is performed.

Finally, kvmppc_handle_load() and kvmppc_handle_store() are modified
to reverse the endianness of the MMIO if required.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
How's that? As the changes were small, I kept them in one patch,
but I can split it if necessary.

This patch was tested for Big Endian and Little Endian HV guests 
and Big Endian PR guests on 3.13 plus these patches :

 - http://patchwork.ozlabs.org/patch/308545/
 - h_set_mode internal patch to support LE guests


Cheers,

C.

Changes in v8:

 - merged kvmppc_byteswap_last_inst() in kvmppc_get_last_inst()	
   (Alexander Graf)
 - depends on http://patchwork.ozlabs.org/patch/308545/

Changes in v7:

 - replaced is_bigendian by is_default_endian (Alexander Graf)

Changes in v6:

 - removed asm changes (Alexander Graf)
 - byteswap last_inst when used in kvmppc_get_last_inst()
 - postponed Split Little Endian support

Changes in v5:

 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h |    8 +++++++-
 arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/emulate.c            |    1 -
 arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
 5 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index db5571f900cd..08d1263c8620 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -271,6 +271,11 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
+}
+
 static inline u32 kvmppc_get_last_inst_internal(struct kvm_vcpu *vcpu, ulong pc)
 {
 	/* Load the instruction manually if it failed to do so in the
@@ -278,7 +283,8 @@ static inline u32 kvmppc_get_last_inst_internal(struct kvm_vcpu *vcpu, ulong pc)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
+		vcpu->arch.last_inst;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c8317fbf92c4..629277df4798 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
                               unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      int is_default_endian);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes,
+			       int is_default_endian);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 79e992d8c823..303ece75b8e4 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * we just return and retry the instruction.
 	 */
 
-	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
+	if (instruction_is_store(kvmppc_get_last_inst(vcpu)) != !!is_store)
 		return RESUME_GUEST;
 
 	/*
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2f9a0873b44f..c2b887be2c29 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9ae97686e9f4..053c92fb55d9 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -673,9 +673,19 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+		       unsigned int rt, unsigned int bytes,
+		       int is_default_endian)
 {
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -711,21 +721,31 @@ EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes,
+			int is_default_endian)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int is_default_endian)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
-- 
1.7.10.4



* Re: [PATCH v8] KVM: PPC: Book3S: MMIO emulation support for little endian guests
  2014-01-09 10:51                                   ` Cédric Le Goater
@ 2014-01-09 10:55                                     ` Alexander Graf
  -1 siblings, 0 replies; 94+ messages in thread
From: Alexander Graf @ 2014-01-09 10:55 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Paul Mackerras, kvm-ppc, kvm@vger.kernel.org mailing list


On 09.01.2014, at 11:51, Cédric Le Goater <clg@fr.ibm.com> wrote:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates it. If the guest is running in Little Endian order,
> or more generally in a different endian order from the host, the
> instruction needs to be byte-swapped before being emulated.
> 
> This patch adds a helper routine which tests the endian order of
> the host and the guest in order to decide whether a byteswap is
> needed or not. It is then used to byteswap the last instruction
> of the guest in the endian order of the host before MMIO emulation
> is performed.
> 
> Finally, kvmppc_handle_load() and kvmppc_handle_store() are modified
> to reverse the endianness of the MMIO if required.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> How's that? As the changes were small, I kept them in one patch,
> but I can split it if necessary.

Very nice, thanks a lot. Applied to kvm-ppc-queue.


Alex



end of thread, other threads:[~2014-01-09 10:55 UTC | newest]

Thread overview: 94+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-10-08 14:12 [PATCH v2 0/3] KVM: PPC: Book3S: MMIO support for Little Endian guests Cédric Le Goater
2013-10-08 14:12 ` Cédric Le Goater
2013-10-08 14:12 ` [PATCH v2 1/3] KVM: PPC: Book3S: add helper routine to load guest instructions Cédric Le Goater
2013-10-08 14:12   ` Cédric Le Goater
2013-10-08 14:12 ` [PATCH v2 2/3] KVM: PPC: Book3S: add helper routines to detect endian Cédric Le Goater
2013-10-08 14:12   ` Cédric Le Goater
2013-10-08 14:12 ` [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests Cédric Le Goater
2013-10-08 14:12   ` Cédric Le Goater
2013-10-08 14:25   ` Alexander Graf
2013-10-08 14:25     ` Alexander Graf
2013-10-08 15:07     ` Cedric Le Goater
2013-10-08 15:07       ` Cedric Le Goater
2013-10-08 15:31       ` [PATCH v3 " Cédric Le Goater
2013-10-08 15:31         ` Cédric Le Goater
2013-10-08 15:36         ` Alexander Graf
2013-10-08 15:36           ` Alexander Graf
2013-10-08 16:10           ` Cedric Le Goater
2013-10-08 16:10             ` Cedric Le Goater
2013-10-08 16:43             ` [PATCH v4 0/3] KVM: PPC: Book3S: MMIO support for Little Endian guests Cédric Le Goater
2013-10-08 16:43               ` Cédric Le Goater
2013-10-08 16:43             ` [PATCH v4 1/3] KVM: PPC: Book3S: add helper routine to load guest instructions Cédric Le Goater
2013-10-08 16:43               ` Cédric Le Goater
2013-10-08 16:43             ` [PATCH v4 2/3] KVM: PPC: Book3S: add helper routines to detect endian order Cédric Le Goater
2013-10-08 16:43               ` Cédric Le Goater
2013-10-08 16:43             ` [PATCH v4 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests Cédric Le Goater
2013-10-08 16:43               ` Cédric Le Goater
2013-10-08 23:31     ` [PATCH v2 " Paul Mackerras
2013-10-08 23:31       ` Paul Mackerras
2013-10-08 23:46       ` Alexander Graf
2013-10-08 23:46         ` Alexander Graf
2013-10-09  5:59         ` Paul Mackerras
2013-10-09  5:59           ` Paul Mackerras
2013-10-09  8:29           ` Alexander Graf
2013-10-09  8:29             ` Alexander Graf
2013-10-09  8:42             ` Cedric Le Goater
2013-10-09  8:42               ` Cedric Le Goater
2013-10-10 10:16             ` Paul Mackerras
2013-10-10 10:16               ` Paul Mackerras
2013-11-04 11:44               ` Alexander Graf
2013-11-04 11:44                 ` Alexander Graf
2013-11-05 12:28                 ` Cedric Le Goater
2013-11-05 12:28                   ` Cedric Le Goater
2013-11-05 13:01                   ` Alexander Graf
2013-11-05 13:01                     ` Alexander Graf
2013-11-05 17:22                     ` [PATCH v5 0/6] KVM: PPC: Book3S: MMIO support for Little Endian guests Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2013-11-05 17:22                     ` [PATCH v5 1/6] KVM: PPC: Book3S: add helper routine to load guest instructions Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2013-11-05 17:22                     ` [PATCH v5 2/6] KVM: PPC: Book3S: add helper routines to detect endian Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2014-01-02 20:05                       ` Alexander Graf
2014-01-02 20:05                         ` Alexander Graf
2014-01-08 17:22                         ` Cedric Le Goater
2014-01-08 17:22                           ` Cedric Le Goater
2013-11-05 17:22                     ` [PATCH v5 3/6] KVM: PPC: Book3S: MMIO emulation support for little endian guests Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2014-01-02 20:22                       ` Alexander Graf
2014-01-02 20:22                         ` Alexander Graf
2014-01-08 17:23                         ` Cedric Le Goater
2014-01-08 17:23                           ` Cedric Le Goater
2014-01-08 17:34                           ` Alexander Graf
2014-01-08 17:34                             ` Alexander Graf
2014-01-08 17:40                             ` Cedric Le Goater
2014-01-08 17:40                               ` Cedric Le Goater
2014-01-08 17:35                           ` [PATCH v6] " Cédric Le Goater
2014-01-08 17:35                             ` Cédric Le Goater
2014-01-09 10:02                           ` [PATCH v7] " Cédric Le Goater
2014-01-09 10:02                             ` Cédric Le Goater
2014-01-09 10:17                             ` Alexander Graf
2014-01-09 10:17                               ` Alexander Graf
2014-01-09 10:33                               ` Cedric Le Goater
2014-01-09 10:33                                 ` Cedric Le Goater
2014-01-09 10:51                                 ` [PATCH v8] " Cédric Le Goater
2014-01-09 10:51                                   ` Cédric Le Goater
2014-01-09 10:55                                   ` Alexander Graf
2014-01-09 10:55                                     ` Alexander Graf
2013-11-05 17:22                     ` [PATCH v5 4/6] KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2013-11-08 14:36                       ` [PATCH v5.1 " Cedric Le Goater
2013-11-08 14:36                         ` Cedric Le Goater
2014-01-02 20:28                         ` Alexander Graf
2014-01-02 20:28                           ` Alexander Graf
2014-01-02 20:25                       ` [PATCH v5 " Alexander Graf
2014-01-02 20:25                         ` Alexander Graf
2013-11-05 17:22                     ` [PATCH v5 5/6] powerpc: add Split Little Endian bit to MSR Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2013-11-05 17:22                     ` [PATCH v5 6/6] KVM: PPC: Book3S: modify byte loading when guest uses Split Little Endian Cédric Le Goater
2013-11-05 17:22                       ` Cédric Le Goater
2014-01-02 20:26                       ` Alexander Graf
2014-01-02 20:26                         ` Alexander Graf
2013-11-06  5:55                     ` [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests Paul Mackerras
2013-11-06  5:55                       ` Paul Mackerras
2013-11-08 14:29                       ` Cedric Le Goater
2013-11-08 14:29                         ` Cedric Le Goater
