* [Qemu-devel] [QEMU-PPC] [PATCH 00/13] target/ppc: Implement KVM support under TCG
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

This patch series adds the necessary parts so that a tcg guest is able to use
kvm facilities. That is, a tcg guest can boot its own kvm guests.

The main requirements for this were a few registers and instructions, as well as
some hcalls and the addition of partition-scoped translation in the radix mmu
emulation.

This can be used to boot a kvm guest under a pseries tcg guest:
Use the power9_v2.2 cpu type and add -machine cap-nested-hv=on for the first guest.
Then inside that guest boot a kvm guest as normal.
This takes advantage of the new hcalls, with qemu emulating them as a normal
hypervisor would on a real machine.
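
For example, a first-level guest could be started with something like the
following (an illustrative invocation only; disk, console and other device
options are elided, and power9_v2.2 is the cpu type added later in this
series):

  qemu-system-ppc64 -machine pseries,cap-nested-hv=on -accel tcg \
                    -cpu power9_v2.2 -smp 2 -m 4G ...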

This can also be used to boot a kvm guest under a powernv tcg guest:
Use any power9 cpu type.
This takes advantage of the newly added hv register access.
Note that for powernv there is no xive interrupt escalation for KVM, which means
that while the guest will boot, it won't receive any interrupts.
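
An illustrative powernv invocation (again with device options elided; any
power9 cpu type will do):

  qemu-system-ppc64 -machine powernv -accel tcg -cpu power9 ...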

Suraj Jitindar Singh (13):
  target/ppc: Implement the VTB for HV access
  target/ppc: Work [S]PURR implementation and add HV support
  target/ppc: Add SPR ASDR
  target/ppc: Add SPR TBU40
  target/ppc: Add privileged message send facilities
  target/ppc: Enforce that the root page directory size must be at least
    5
  target/ppc: Handle partition scoped radix tree translation
  target/ppc: Implement hcall H_SET_PARTITION_TABLE
  target/ppc: Implement hcall H_ENTER_NESTED
  target/ppc: Implement hcall H_TLB_INVALIDATE
  target/ppc: Implement hcall H_COPY_TOFROM_GUEST
  target/ppc: Introduce POWER9 DD2.2 cpu type
  target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg

 hw/ppc/ppc.c                    |  46 ++++-
 hw/ppc/spapr_caps.c             |  22 ++-
 hw/ppc/spapr_cpu_core.c         |   1 +
 hw/ppc/spapr_hcall.c            | 409 +++++++++++++++++++++++++++++++++++++++
 include/hw/ppc/ppc.h            |   4 +-
 include/hw/ppc/spapr.h          |   7 +-
 linux-user/ppc/cpu_loop.c       |   5 +
 target/ppc/cpu-models.c         |   2 +
 target/ppc/cpu-models.h         |   1 +
 target/ppc/cpu.h                |  70 +++++++
 target/ppc/excp_helper.c        |  79 +++++++-
 target/ppc/helper.h             |   9 +
 target/ppc/misc_helper.c        |  46 +++++
 target/ppc/mmu-radix64.c        | 412 ++++++++++++++++++++++++++++------------
 target/ppc/mmu-radix64.h        |   4 +
 target/ppc/timebase_helper.c    |  20 ++
 target/ppc/translate.c          |  28 +++
 target/ppc/translate_init.inc.c | 107 +++++++++--
 18 files changed, 1115 insertions(+), 157 deletions(-)

-- 
2.13.6

* [Qemu-devel] [QEMU-PPC] [PATCH 01/13] target/ppc: Implement the VTB for HV access
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The virtual timebase register (VTB) is a 64-bit register which
increments at the same rate as the timebase register, and is present on
POWER8 and later processors.

The register can be read/written by the hypervisor and read by
the supervisor. All other accesses are illegal.

Currently the VTB is just an alias for the timebase (TB) register.

Implement the VTB so that it can be read/written independently of the TB.
Make use of the existing method for accessing timebase facilities, whereby
a compensation offset is stored and used to compute the value on reads and
is updated on writes.
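
For reference, the existing offset scheme referred to above works roughly as
follows (a paraphrased sketch of the helpers in hw/ppc/ppc.c, not part of this
patch):

  /* Convert the virtual clock to timebase ticks and apply the stored offset */
  uint64_t cpu_ppc_get_tb(ppc_tb_t *tb_env, uint64_t vmclk, int64_t tb_offset)
  {
      return muldiv64(vmclk, tb_env->tb_freq, NANOSECONDS_PER_SECOND) + tb_offset;
  }

  /* Record the offset needed to make the current ticks read back as 'value' */
  void cpu_ppc_store_tb(ppc_tb_t *tb_env, uint64_t vmclk, int64_t *tb_offsetp,
                        uint64_t value)
  {
      *tb_offsetp = value - muldiv64(vmclk, tb_env->tb_freq,
                                     NANOSECONDS_PER_SECOND);
  }

The VTB then only needs its own vtb_offset field and the small accessors added
below.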

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/ppc.c                    | 16 ++++++++++++++++
 include/hw/ppc/ppc.h            |  1 +
 linux-user/ppc/cpu_loop.c       |  5 +++++
 target/ppc/cpu.h                |  2 ++
 target/ppc/helper.h             |  2 ++
 target/ppc/timebase_helper.c    | 10 ++++++++++
 target/ppc/translate_init.inc.c | 19 +++++++++++++++----
 7 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index b2ff99ec66..a57ca64626 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -694,6 +694,22 @@ void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value)
                      &tb_env->atb_offset, ((uint64_t)value << 32) | tb);
 }
 
+uint64_t cpu_ppc_load_vtb(CPUPPCState *env)
+{
+    ppc_tb_t *tb_env = env->tb_env;
+
+    return cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+                          tb_env->vtb_offset);
+}
+
+void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value)
+{
+    ppc_tb_t *tb_env = env->tb_env;
+
+    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+                     &tb_env->vtb_offset, value);
+}
+
 static void cpu_ppc_tb_stop (CPUPPCState *env)
 {
     ppc_tb_t *tb_env = env->tb_env;
diff --git a/include/hw/ppc/ppc.h b/include/hw/ppc/ppc.h
index 4bdcb8bacd..205150e6b4 100644
--- a/include/hw/ppc/ppc.h
+++ b/include/hw/ppc/ppc.h
@@ -23,6 +23,7 @@ struct ppc_tb_t {
     /* Time base management */
     int64_t  tb_offset;    /* Compensation                    */
     int64_t  atb_offset;   /* Compensation                    */
+    int64_t  vtb_offset;
     uint32_t tb_freq;      /* TB frequency                    */
     /* Decrementer management */
     uint64_t decr_next;    /* Tick for next decr interrupt    */
diff --git a/linux-user/ppc/cpu_loop.c b/linux-user/ppc/cpu_loop.c
index 801f5ace29..c715861804 100644
--- a/linux-user/ppc/cpu_loop.c
+++ b/linux-user/ppc/cpu_loop.c
@@ -46,6 +46,11 @@ uint32_t cpu_ppc_load_atbu(CPUPPCState *env)
     return cpu_ppc_get_tb(env) >> 32;
 }
 
+uint64_t cpu_ppc_load_vtb(CPUPPCState *env)
+{
+    return cpu_ppc_get_tb(env);
+}
+
 uint32_t cpu_ppc601_load_rtcu(CPUPPCState *env)
 __attribute__ (( alias ("cpu_ppc_load_tbu") ));
 
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index fe93cf0555..70167bae22 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1327,6 +1327,8 @@ uint64_t cpu_ppc_load_atbl (CPUPPCState *env);
 uint32_t cpu_ppc_load_atbu (CPUPPCState *env);
 void cpu_ppc_store_atbl (CPUPPCState *env, uint32_t value);
 void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value);
+uint64_t cpu_ppc_load_vtb(CPUPPCState *env);
+void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value);
 bool ppc_decr_clear_on_delivery(CPUPPCState *env);
 target_ulong cpu_ppc_load_decr (CPUPPCState *env);
 void cpu_ppc_store_decr (CPUPPCState *env, target_ulong value);
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index 69cbf7922f..3701bcbf1b 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -680,6 +680,7 @@ DEF_HELPER_FLAGS_1(load_tbl, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_1(load_tbu, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_1(load_atbl, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_1(load_atbu, TCG_CALL_NO_RWG, tl, env)
+DEF_HELPER_FLAGS_1(load_vtb, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_1(load_601_rtcl, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_1(load_601_rtcu, TCG_CALL_NO_RWG, tl, env)
 #if !defined(CONFIG_USER_ONLY)
@@ -700,6 +701,7 @@ DEF_HELPER_FLAGS_1(load_decr, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_2(store_decr, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_FLAGS_1(load_hdecr, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_2(store_hdecr, TCG_CALL_NO_RWG, void, env, tl)
+DEF_HELPER_FLAGS_2(store_vtb, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_2(store_hid0_601, void, env, tl)
 DEF_HELPER_3(store_403_pbr, void, env, i32, tl)
 DEF_HELPER_FLAGS_1(load_40x_pit, TCG_CALL_NO_RWG, tl, env)
diff --git a/target/ppc/timebase_helper.c b/target/ppc/timebase_helper.c
index 73363e08ae..8c3c2fe67c 100644
--- a/target/ppc/timebase_helper.c
+++ b/target/ppc/timebase_helper.c
@@ -45,6 +45,11 @@ target_ulong helper_load_atbu(CPUPPCState *env)
     return cpu_ppc_load_atbu(env);
 }
 
+target_ulong helper_load_vtb(CPUPPCState *env)
+{
+    return cpu_ppc_load_vtb(env);
+}
+
 #if defined(TARGET_PPC64) && !defined(CONFIG_USER_ONLY)
 target_ulong helper_load_purr(CPUPPCState *env)
 {
@@ -113,6 +118,11 @@ void helper_store_hdecr(CPUPPCState *env, target_ulong val)
     cpu_ppc_store_hdecr(env, val);
 }
 
+void helper_store_vtb(CPUPPCState *env, target_ulong val)
+{
+    cpu_ppc_store_vtb(env, val);
+}
+
 target_ulong helper_load_40x_pit(CPUPPCState *env)
 {
     return load_40x_pit(env);
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 0bd555eb19..e3f941800b 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -310,6 +310,16 @@ static void spr_write_hdecr(DisasContext *ctx, int sprn, int gprn)
     }
 }
 
+static void spr_read_vtb(DisasContext *ctx, int gprn, int sprn)
+{
+    gen_helper_load_vtb(cpu_gpr[gprn], cpu_env);
+}
+
+static void spr_write_vtb(DisasContext *ctx, int sprn, int gprn)
+{
+    gen_helper_store_vtb(cpu_env, cpu_gpr[gprn]);
+}
+
 #endif
 #endif
 
@@ -8133,10 +8143,11 @@ static void gen_spr_power8_ebb(CPUPPCState *env)
 /* Virtual Time Base */
 static void gen_spr_vtb(CPUPPCState *env)
 {
-    spr_register_kvm(env, SPR_VTB, "VTB",
-                 SPR_NOACCESS, SPR_NOACCESS,
-                 &spr_read_tbl, SPR_NOACCESS,
-                 KVM_REG_PPC_VTB, 0x00000000);
+    spr_register_kvm_hv(env, SPR_VTB, "VTB",
+                        SPR_NOACCESS, SPR_NOACCESS,
+                        &spr_read_vtb, SPR_NOACCESS,
+                        &spr_read_vtb, &spr_write_vtb,
+                        KVM_REG_PPC_VTB, 0x00000000);
 }
 
 static void gen_spr_power8_fscr(CPUPPCState *env)
-- 
2.13.6

* [Qemu-devel] [QEMU-PPC] [PATCH 02/13] target/ppc: Work [S]PURR implementation and add HV support
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The Processor Utilisation of Resources Register (PURR) and Scaled
Processor Utilisation of Resources Register (SPURR) provide an estimate
of the resources used by the thread, and are present on POWER7 and later
processors.

Currently the [S]PURR registers simply count at the rate of the
timebase.

Preserve this behaviour but rework the implementation to store an offset
like the timebase does, rather than doing the calculation manually. Also allow
hypervisor write access to these registers in addition to the currently
available read access.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/ppc.c                    | 17 +++++++----------
 include/hw/ppc/ppc.h            |  3 +--
 target/ppc/cpu.h                |  1 +
 target/ppc/helper.h             |  1 +
 target/ppc/timebase_helper.c    |  5 +++++
 target/ppc/translate_init.inc.c | 23 +++++++++++++++--------
 6 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index a57ca64626..b567156f97 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -819,12 +819,9 @@ target_ulong cpu_ppc_load_hdecr (CPUPPCState *env)
 uint64_t cpu_ppc_load_purr (CPUPPCState *env)
 {
     ppc_tb_t *tb_env = env->tb_env;
-    uint64_t diff;
 
-    diff = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - tb_env->purr_start;
-
-    return tb_env->purr_load +
-        muldiv64(diff, tb_env->tb_freq, NANOSECONDS_PER_SECOND);
+    return cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+                          tb_env->purr_offset);
 }
 
 /* When decrementer expires,
@@ -980,12 +977,12 @@ static void cpu_ppc_hdecr_cb(void *opaque)
     cpu_ppc_hdecr_excp(cpu);
 }
 
-static void cpu_ppc_store_purr(PowerPCCPU *cpu, uint64_t value)
+void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value)
 {
-    ppc_tb_t *tb_env = cpu->env.tb_env;
+    ppc_tb_t *tb_env = env->tb_env;
 
-    tb_env->purr_load = value;
-    tb_env->purr_start = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+                     &tb_env->purr_offset, value);
 }
 
 static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
@@ -1002,7 +999,7 @@ static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
      */
     _cpu_ppc_store_decr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
     _cpu_ppc_store_hdecr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
-    cpu_ppc_store_purr(cpu, 0x0000000000000000ULL);
+    cpu_ppc_store_purr(env, 0x0000000000000000ULL);
 }
 
 static void timebase_save(PPCTimebase *tb)
diff --git a/include/hw/ppc/ppc.h b/include/hw/ppc/ppc.h
index 205150e6b4..b09ffbf300 100644
--- a/include/hw/ppc/ppc.h
+++ b/include/hw/ppc/ppc.h
@@ -32,8 +32,7 @@ struct ppc_tb_t {
     /* Hypervisor decrementer management */
     uint64_t hdecr_next;    /* Tick for next hdecr interrupt  */
     QEMUTimer *hdecr_timer;
-    uint64_t purr_load;
-    uint64_t purr_start;
+    int64_t purr_offset;
     void *opaque;
     uint32_t flags;
 };
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 70167bae22..19b3e1de0e 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1335,6 +1335,7 @@ void cpu_ppc_store_decr (CPUPPCState *env, target_ulong value);
 target_ulong cpu_ppc_load_hdecr (CPUPPCState *env);
 void cpu_ppc_store_hdecr (CPUPPCState *env, target_ulong value);
 uint64_t cpu_ppc_load_purr (CPUPPCState *env);
+void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value);
 uint32_t cpu_ppc601_load_rtcl (CPUPPCState *env);
 uint32_t cpu_ppc601_load_rtcu (CPUPPCState *env);
 #if !defined(CONFIG_USER_ONLY)
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index 3701bcbf1b..336e7802fb 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -686,6 +686,7 @@ DEF_HELPER_FLAGS_1(load_601_rtcu, TCG_CALL_NO_RWG, tl, env)
 #if !defined(CONFIG_USER_ONLY)
 #if defined(TARGET_PPC64)
 DEF_HELPER_FLAGS_1(load_purr, TCG_CALL_NO_RWG, tl, env)
+DEF_HELPER_FLAGS_2(store_purr, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_2(store_ptcr, void, env, tl)
 #endif
 DEF_HELPER_2(store_sdr1, void, env, tl)
diff --git a/target/ppc/timebase_helper.c b/target/ppc/timebase_helper.c
index 8c3c2fe67c..2395295b77 100644
--- a/target/ppc/timebase_helper.c
+++ b/target/ppc/timebase_helper.c
@@ -55,6 +55,11 @@ target_ulong helper_load_purr(CPUPPCState *env)
 {
     return (target_ulong)cpu_ppc_load_purr(env);
 }
+
+void helper_store_purr(CPUPPCState *env, target_ulong val)
+{
+    cpu_ppc_store_purr(env, val);
+}
 #endif
 
 target_ulong helper_load_601_rtcl(CPUPPCState *env)
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index e3f941800b..9cd33e79ef 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -285,6 +285,11 @@ static void spr_read_purr(DisasContext *ctx, int gprn, int sprn)
     gen_helper_load_purr(cpu_gpr[gprn], cpu_env);
 }
 
+static void spr_write_purr(DisasContext *ctx, int sprn, int gprn)
+{
+    gen_helper_store_purr(cpu_env, cpu_gpr[gprn]);
+}
+
 /* HDECR */
 static void spr_read_hdecr(DisasContext *ctx, int gprn, int sprn)
 {
@@ -7972,14 +7977,16 @@ static void gen_spr_book3s_purr(CPUPPCState *env)
 {
 #if !defined(CONFIG_USER_ONLY)
     /* PURR & SPURR: Hack - treat these as aliases for the TB for now */
-    spr_register_kvm(env, SPR_PURR,   "PURR",
-                     &spr_read_purr, SPR_NOACCESS,
-                     &spr_read_purr, SPR_NOACCESS,
-                     KVM_REG_PPC_PURR, 0x00000000);
-    spr_register_kvm(env, SPR_SPURR,   "SPURR",
-                     &spr_read_purr, SPR_NOACCESS,
-                     &spr_read_purr, SPR_NOACCESS,
-                     KVM_REG_PPC_SPURR, 0x00000000);
+    spr_register_kvm_hv(env, SPR_PURR,   "PURR",
+                        &spr_read_purr, SPR_NOACCESS,
+                        &spr_read_purr, SPR_NOACCESS,
+                        &spr_read_purr, &spr_write_purr,
+                        KVM_REG_PPC_PURR, 0x00000000);
+    spr_register_kvm_hv(env, SPR_SPURR,   "SPURR",
+                        &spr_read_purr, SPR_NOACCESS,
+                        &spr_read_purr, SPR_NOACCESS,
+                        &spr_read_purr, &spr_write_purr,
+                        KVM_REG_PPC_SPURR, 0x00000000);
 #endif
 }
 
-- 
2.13.6

* [Qemu-devel] [QEMU-PPC] [PATCH 03/13] target/ppc: Add SPR ASDR
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The Access Segment Descriptor Register (ASDR) provides information about
the storage element when taking a hypervisor storage interrupt. When
performing nested radix address translation, this is normally the guest
real address. This register is present on POWER9 processors and later.

Implement the ASDR; note that read and write access is limited to the
hypervisor.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 target/ppc/cpu.h                | 1 +
 target/ppc/translate_init.inc.c | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 19b3e1de0e..8d66265e5a 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1797,6 +1797,7 @@ void ppc_compat_add_property(Object *obj, const char *name,
 #define SPR_MPC_MD_DBRAM1     (0x32A)
 #define SPR_RCPU_L2U_RA3      (0x32B)
 #define SPR_TAR               (0x32F)
+#define SPR_ASDR              (0x330)
 #define SPR_IC                (0x350)
 #define SPR_VTB               (0x351)
 #define SPR_MMCRC             (0x353)
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 9cd33e79ef..a0cae58e19 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8243,6 +8243,12 @@ static void gen_spr_power9_mmu(CPUPPCState *env)
                         SPR_NOACCESS, SPR_NOACCESS,
                         &spr_read_generic, &spr_write_ptcr,
                         KVM_REG_PPC_PTCR, 0x00000000);
+    /* Address Segment Descriptor Register */
+    spr_register_hv(env, SPR_ASDR, "ASDR",
+                    SPR_NOACCESS, SPR_NOACCESS,
+                    SPR_NOACCESS, SPR_NOACCESS,
+                    &spr_read_generic, &spr_write_generic,
+                    0x0000000000000000);
 #endif
 }
 
-- 
2.13.6

* [Qemu-devel] [QEMU-PPC] [PATCH 04/13] target/ppc: Add SPR TBU40
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The SPR TBU40 is used to set the upper 40 bits of the timebase
register, and is present on POWER5+ and later processors.

This register can only be written by the hypervisor, and cannot be read.
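
As an illustrative calculation (not part of the patch): if the TB currently
reads 0x00000123456789AB and the hypervisor writes 0xFFFFFFFFFF000000 to
TBU40, the low 24 bits keep counting and the TB becomes 0xFFFFFFFFFF6789AB;
the low 24 bits of the value written are ignored.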

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/ppc.c                    | 13 +++++++++++++
 target/ppc/cpu.h                |  1 +
 target/ppc/helper.h             |  1 +
 target/ppc/timebase_helper.c    |  5 +++++
 target/ppc/translate_init.inc.c | 19 +++++++++++++++++++
 5 files changed, 39 insertions(+)

diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index b567156f97..b618c6f615 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -710,6 +710,19 @@ void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value)
                      &tb_env->vtb_offset, value);
 }
 
+void cpu_ppc_store_tbu40(CPUPPCState *env, uint64_t value)
+{
+    ppc_tb_t *tb_env = env->tb_env;
+    uint64_t tb;
+
+    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+                        tb_env->tb_offset);
+    tb &= 0xFFFFFFUL;
+    tb |= (value & ~0xFFFFFFUL);
+    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+                     &tb_env->tb_offset, tb);
+}
+
 static void cpu_ppc_tb_stop (CPUPPCState *env)
 {
     ppc_tb_t *tb_env = env->tb_env;
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 8d66265e5a..e324064111 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1334,6 +1334,7 @@ target_ulong cpu_ppc_load_decr (CPUPPCState *env);
 void cpu_ppc_store_decr (CPUPPCState *env, target_ulong value);
 target_ulong cpu_ppc_load_hdecr (CPUPPCState *env);
 void cpu_ppc_store_hdecr (CPUPPCState *env, target_ulong value);
+void cpu_ppc_store_tbu40(CPUPPCState *env, uint64_t value);
 uint64_t cpu_ppc_load_purr (CPUPPCState *env);
 void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value);
 uint32_t cpu_ppc601_load_rtcl (CPUPPCState *env);
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index 336e7802fb..6aee195528 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -703,6 +703,7 @@ DEF_HELPER_FLAGS_2(store_decr, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_FLAGS_1(load_hdecr, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_2(store_hdecr, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_FLAGS_2(store_vtb, TCG_CALL_NO_RWG, void, env, tl)
+DEF_HELPER_FLAGS_2(store_tbu40, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_2(store_hid0_601, void, env, tl)
 DEF_HELPER_3(store_403_pbr, void, env, i32, tl)
 DEF_HELPER_FLAGS_1(load_40x_pit, TCG_CALL_NO_RWG, tl, env)
diff --git a/target/ppc/timebase_helper.c b/target/ppc/timebase_helper.c
index 2395295b77..703bd9ed18 100644
--- a/target/ppc/timebase_helper.c
+++ b/target/ppc/timebase_helper.c
@@ -128,6 +128,11 @@ void helper_store_vtb(CPUPPCState *env, target_ulong val)
     cpu_ppc_store_vtb(env, val);
 }
 
+void helper_store_tbu40(CPUPPCState *env, target_ulong val)
+{
+    cpu_ppc_store_tbu40(env, val);
+}
+
 target_ulong helper_load_40x_pit(CPUPPCState *env)
 {
     return load_40x_pit(env);
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index a0cae58e19..8e287066e5 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -325,6 +325,11 @@ static void spr_write_vtb(DisasContext *ctx, int sprn, int gprn)
     gen_helper_store_vtb(cpu_env, cpu_gpr[gprn]);
 }
 
+static void spr_write_tbu40(DisasContext *ctx, int sprn, int gprn)
+{
+    gen_helper_store_tbu40(cpu_env, cpu_gpr[gprn]);
+}
+
 #endif
 #endif
 
@@ -7812,6 +7817,16 @@ static void gen_spr_power5p_ear(CPUPPCState *env)
                  0x00000000);
 }
 
+static void gen_spr_power5p_tb(CPUPPCState *env)
+{
+    /* TBU40 (High 40 bits of the Timebase register */
+    spr_register_hv(env, SPR_TBU40, "TBU40",
+                    SPR_NOACCESS, SPR_NOACCESS,
+                    SPR_NOACCESS, SPR_NOACCESS,
+                    SPR_NOACCESS, &spr_write_tbu40,
+                    0x00000000);
+}
+
 #if !defined(CONFIG_USER_ONLY)
 static void spr_write_hmer(DisasContext *ctx, int sprn, int gprn)
 {
@@ -8352,6 +8367,7 @@ static void init_proc_power5plus(CPUPPCState *env)
     gen_spr_power5p_common(env);
     gen_spr_power5p_lpar(env);
     gen_spr_power5p_ear(env);
+    gen_spr_power5p_tb(env);
 
     /* env variables */
     env->dcache_line_size = 128;
@@ -8464,6 +8480,7 @@ static void init_proc_POWER7(CPUPPCState *env)
     gen_spr_power5p_common(env);
     gen_spr_power5p_lpar(env);
     gen_spr_power5p_ear(env);
+    gen_spr_power5p_tb(env);
     gen_spr_power6_common(env);
     gen_spr_power6_dbg(env);
     gen_spr_power7_book4(env);
@@ -8605,6 +8622,7 @@ static void init_proc_POWER8(CPUPPCState *env)
     gen_spr_power5p_common(env);
     gen_spr_power5p_lpar(env);
     gen_spr_power5p_ear(env);
+    gen_spr_power5p_tb(env);
     gen_spr_power6_common(env);
     gen_spr_power6_dbg(env);
     gen_spr_power8_tce_address_control(env);
@@ -8793,6 +8811,7 @@ static void init_proc_POWER9(CPUPPCState *env)
     gen_spr_power5p_common(env);
     gen_spr_power5p_lpar(env);
     gen_spr_power5p_ear(env);
+    gen_spr_power5p_tb(env);
     gen_spr_power6_common(env);
     gen_spr_power6_dbg(env);
     gen_spr_power8_tce_address_control(env);
-- 
2.13.6

* [Qemu-devel] [QEMU-PPC] [PATCH 05/13] target/ppc: Add privileged message send facilities
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The privileged message send facilities, present on POWER8 and later
processors, comprise a register and instructions which can be used to
generate, observe/modify the state of, and clear privileged doorbell
exceptions, as described below.

The Directed Privileged Doorbell Exception State (DPDES) register
reflects the state of pending privileged doorbell exceptions and can
also be used to modify that state. The register can be used to read and
modify the state of privileged doorbell exceptions for all threads of a
subprocessor and thus is a shared facility for that subprocessor. The
register can be read/written by the hypervisor and read by the
supervisor if enabled in the HFSCR, otherwise a hypervisor facility
unavailable exception is generated.

The privileged message send and clear instructions (msgsndp & msgclrp)
are used to generate and clear the presence of a directed privileged
doorbell exception, respectively. The msgsndp instruction can be used to
target any thread of the current subprocessor, while msgclrp acts on the
thread issuing the instruction. These instructions are privileged, but
will generate a hypervisor facility unavailable exception if not enabled
in the HFSCR and executed in privileged non-hypervisor state.

Add and implement this register and these instructions by reading or modifying the
pending interrupt state of the cpu.

Note that TCG only supports one thread per core and so we only need to
worry about the cpu making the access.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 target/ppc/cpu.h                |  7 +++++
 target/ppc/excp_helper.c        | 63 +++++++++++++++++++++++++++++++++++++----
 target/ppc/helper.h             |  5 ++++
 target/ppc/misc_helper.c        | 46 ++++++++++++++++++++++++++++++
 target/ppc/translate.c          | 28 ++++++++++++++++++
 target/ppc/translate_init.inc.c | 40 ++++++++++++++++++++++++++
 6 files changed, 184 insertions(+), 5 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index e324064111..1d2a088391 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -425,6 +425,10 @@ typedef struct ppc_v3_pate_t {
 #define PSSCR_ESL         PPC_BIT(42) /* Enable State Loss */
 #define PSSCR_EC          PPC_BIT(43) /* Exit Criterion */
 
+/* HFSCR bits */
+#define HFSCR_MSGSNDP     PPC_BIT(53) /* Privileged Message Send Facilities */
+#define HFSCR_IC_MSGSNDP  0xA
+
 #define msr_sf   ((env->msr >> MSR_SF)   & 1)
 #define msr_isf  ((env->msr >> MSR_ISF)  & 1)
 #define msr_shv  ((env->msr >> MSR_SHV)  & 1)
@@ -1355,6 +1359,8 @@ void cpu_ppc_set_vhyp(PowerPCCPU *cpu, PPCVirtualHypervisor *vhyp);
 #endif
 
 void store_fpscr(CPUPPCState *env, uint64_t arg, uint32_t mask);
+void gen_hfscr_facility_check(DisasContext *ctx, int facility_sprn, int bit,
+                              int sprn, int cause);
 
 static inline uint64_t ppc_dump_gpr(CPUPPCState *env, int gprn)
 {
@@ -1501,6 +1507,7 @@ void ppc_compat_add_property(Object *obj, const char *name,
 #define SPR_MPC_ICTRL         (0x09E)
 #define SPR_MPC_BAR           (0x09F)
 #define SPR_PSPB              (0x09F)
+#define SPR_DPDES             (0x0B0)
 #define SPR_DAWR              (0x0B4)
 #define SPR_RPR               (0x0BA)
 #define SPR_CIABR             (0x0BB)
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index beafcf1ebd..7a4da7bdba 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -461,6 +461,13 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
         env->spr[SPR_FSCR] |= ((target_ulong)env->error_code << 56);
 #endif
         break;
+    case POWERPC_EXCP_HV_FU:     /* Hypervisor Facility Unavailable Exception */
+        env->spr[SPR_HFSCR] |= ((target_ulong)env->error_code << FSCR_IC_POS);
+        srr0 = SPR_HSRR0;
+        srr1 = SPR_HSRR1;
+        new_msr |= (target_ulong)MSR_HVB;
+        new_msr |= env->msr & ((target_ulong)1 << MSR_RI);
+        break;
     case POWERPC_EXCP_PIT:       /* Programmable interval timer interrupt    */
         LOG_EXCP("PIT exception\n");
         break;
@@ -884,7 +891,11 @@ static void ppc_hw_interrupt(CPUPPCState *env)
         }
         if (env->pending_interrupts & (1 << PPC_INTERRUPT_DOORBELL)) {
             env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DOORBELL);
-            powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_DOORI);
+            if (env->insns_flags & PPC_SEGMENT_64B) {
+                powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_SDOOR);
+            } else {
+                powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_DOORI);
+            }
             return;
         }
         if (env->pending_interrupts & (1 << PPC_INTERRUPT_HDOORBELL)) {
@@ -1202,19 +1213,26 @@ void helper_msgsnd(target_ulong rb)
 }
 
 /* Server Processor Control */
-static int book3s_dbell2irq(target_ulong rb)
+static int book3s_dbell2irq(target_ulong rb, bool hv_dbell)
 {
     int msg = rb & DBELL_TYPE_MASK;
 
     /* A Directed Hypervisor Doorbell message is sent only if the
      * message type is 5. All other types are reserved and the
      * instruction is a no-op */
-    return msg == DBELL_TYPE_DBELL_SERVER ? PPC_INTERRUPT_HDOORBELL : -1;
+    if (msg == DBELL_TYPE_DBELL_SERVER) {
+        if (hv_dbell)
+            return PPC_INTERRUPT_HDOORBELL;
+        else
+            return PPC_INTERRUPT_DOORBELL;
+    }
+
+    return -1;
 }
 
 void helper_book3s_msgclr(CPUPPCState *env, target_ulong rb)
 {
-    int irq = book3s_dbell2irq(rb);
+    int irq = book3s_dbell2irq(rb, 1);
 
     if (irq < 0) {
         return;
@@ -1225,7 +1243,42 @@ void helper_book3s_msgclr(CPUPPCState *env, target_ulong rb)
 
 void helper_book3s_msgsnd(target_ulong rb)
 {
-    int irq = book3s_dbell2irq(rb);
+    int irq = book3s_dbell2irq(rb, 1);
+    int pir = rb & DBELL_PROCIDTAG_MASK;
+    CPUState *cs;
+
+    if (irq < 0) {
+        return;
+    }
+
+    qemu_mutex_lock_iothread();
+    CPU_FOREACH(cs) {
+        PowerPCCPU *cpu = POWERPC_CPU(cs);
+        CPUPPCState *cenv = &cpu->env;
+
+        /* TODO: broadcast message to all threads of the same  processor */
+        if (cenv->spr_cb[SPR_PIR].default_value == pir) {
+            cenv->pending_interrupts |= 1 << irq;
+            cpu_interrupt(cs, CPU_INTERRUPT_HARD);
+        }
+    }
+    qemu_mutex_unlock_iothread();
+}
+
+void helper_book3s_msgclrp(CPUPPCState *env, target_ulong rb)
+{
+    int irq = book3s_dbell2irq(rb, 0);
+
+    if (irq < 0) {
+        return;
+    }
+
+    env->pending_interrupts &= ~(1 << irq);
+}
+
+void helper_book3s_msgsndp(target_ulong rb)
+{
+    int irq = book3s_dbell2irq(rb, 0);
     int pir = rb & DBELL_PROCIDTAG_MASK;
     CPUState *cs;
 
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index 6aee195528..040f59d1af 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -657,6 +657,8 @@ DEF_HELPER_1(msgsnd, void, tl)
 DEF_HELPER_2(msgclr, void, env, tl)
 DEF_HELPER_1(book3s_msgsnd, void, tl)
 DEF_HELPER_2(book3s_msgclr, void, env, tl)
+DEF_HELPER_1(book3s_msgsndp, void, tl)
+DEF_HELPER_2(book3s_msgclrp, void, env, tl)
 #endif
 
 DEF_HELPER_4(dlmzb, tl, env, tl, tl, i32)
@@ -674,6 +676,7 @@ DEF_HELPER_3(store_dcr, void, env, tl, tl)
 
 DEF_HELPER_2(load_dump_spr, void, env, i32)
 DEF_HELPER_2(store_dump_spr, void, env, i32)
+DEF_HELPER_4(hfscr_facility_check, void, env, i32, i32, i32)
 DEF_HELPER_4(fscr_facility_check, void, env, i32, i32, i32)
 DEF_HELPER_4(msr_facility_check, void, env, i32, i32, i32)
 DEF_HELPER_FLAGS_1(load_tbl, TCG_CALL_NO_RWG, tl, env)
@@ -688,6 +691,8 @@ DEF_HELPER_FLAGS_1(load_601_rtcu, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_1(load_purr, TCG_CALL_NO_RWG, tl, env)
 DEF_HELPER_FLAGS_2(store_purr, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_2(store_ptcr, void, env, tl)
+DEF_HELPER_FLAGS_1(load_dpdes, TCG_CALL_NO_RWG, tl, env)
+DEF_HELPER_FLAGS_2(store_dpdes, TCG_CALL_NO_RWG, void, env, tl)
 #endif
 DEF_HELPER_2(store_sdr1, void, env, tl)
 DEF_HELPER_2(store_pidr, void, env, tl)
diff --git a/target/ppc/misc_helper.c b/target/ppc/misc_helper.c
index c65d1ade15..d7d4acca7f 100644
--- a/target/ppc/misc_helper.c
+++ b/target/ppc/misc_helper.c
@@ -39,6 +39,17 @@ void helper_store_dump_spr(CPUPPCState *env, uint32_t sprn)
 }
 
 #ifdef TARGET_PPC64
+static void raise_hv_fu_exception(CPUPPCState *env, uint32_t bit,
+                                  uint32_t sprn, uint32_t cause,
+                                  uintptr_t raddr)
+{
+    qemu_log("Facility SPR %d is unavailable (SPR HFSCR:%d)\n", sprn, bit);
+
+    env->spr[SPR_HFSCR] &= ~((target_ulong)FSCR_IC_MASK << FSCR_IC_POS);
+
+    raise_exception_err_ra(env, POWERPC_EXCP_HV_FU, cause, raddr);
+}
+
 static void raise_fu_exception(CPUPPCState *env, uint32_t bit,
                                uint32_t sprn, uint32_t cause,
                                uintptr_t raddr)
@@ -53,6 +64,17 @@ static void raise_fu_exception(CPUPPCState *env, uint32_t bit,
 }
 #endif
 
+void helper_hfscr_facility_check(CPUPPCState *env, uint32_t bit,
+                                 uint32_t sprn, uint32_t cause)
+{
+#ifdef TARGET_PPC64
+    if ((env->msr_mask & MSR_HVB) && !msr_hv &&
+                                     !(env->spr[SPR_HFSCR] & (1UL << bit))) {
+        raise_hv_fu_exception(env, bit, sprn, cause, GETPC());
+    }
+#endif
+}
+
 void helper_fscr_facility_check(CPUPPCState *env, uint32_t bit,
                                 uint32_t sprn, uint32_t cause)
 {
@@ -107,6 +129,30 @@ void helper_store_pcr(CPUPPCState *env, target_ulong value)
 
     env->spr[SPR_PCR] = value & pcc->pcr_mask;
 }
+
+target_ulong helper_load_dpdes(CPUPPCState *env)
+{
+    helper_hfscr_facility_check(env, HFSCR_MSGSNDP, SPR_DPDES,
+                                HFSCR_IC_MSGSNDP);
+
+    if (env->pending_interrupts & (1 << PPC_INTERRUPT_DOORBELL))
+        return 1;
+    return 0;
+}
+
+void helper_store_dpdes(CPUPPCState *env, target_ulong val)
+{
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+
+    if (val) {
+        /* Only one cpu for now */
+        env->pending_interrupts |= 1 << PPC_INTERRUPT_DOORBELL;
+        cpu_interrupt(cs, CPU_INTERRUPT_HARD);
+    } else {
+        env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DOORBELL);
+    }
+}
 #endif /* defined(TARGET_PPC64) */
 
 void helper_store_pidr(CPUPPCState *env, target_ulong val)
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index fb42585a1c..2c3e83d18e 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -6537,6 +6537,30 @@ static void gen_msgsnd(DisasContext *ctx)
 #endif /* defined(CONFIG_USER_ONLY) */
 }
 
+static void gen_msgclrp(DisasContext *ctx)
+{
+#if defined(CONFIG_USER_ONLY)
+    GEN_PRIV;
+#else
+    CHK_SV;
+    gen_hfscr_facility_check(ctx, SPR_HFSCR, HFSCR_MSGSNDP, 0,
+                             HFSCR_IC_MSGSNDP);
+    gen_helper_book3s_msgclrp(cpu_env, cpu_gpr[rB(ctx->opcode)]);
+#endif /* defined(CONFIG_USER_ONLY) */
+}
+
+static void gen_msgsndp(DisasContext *ctx)
+{
+#if defined(CONFIG_USER_ONLY)
+    GEN_PRIV;
+#else
+    CHK_SV;
+    gen_hfscr_facility_check(ctx, SPR_HFSCR, HFSCR_MSGSNDP, 0,
+                             HFSCR_IC_MSGSNDP);
+    gen_helper_book3s_msgsndp(cpu_gpr[rB(ctx->opcode)]);
+#endif /* defined(CONFIG_USER_ONLY) */
+}
+
 static void gen_msgsync(DisasContext *ctx)
 {
 #if defined(CONFIG_USER_ONLY)
@@ -7054,6 +7078,10 @@ GEN_HANDLER2_E(msgclr, "msgclr", 0x1F, 0x0E, 0x07, 0x03ff0001,
                PPC_NONE, PPC2_PRCNTL),
 GEN_HANDLER2_E(msgsync, "msgsync", 0x1F, 0x16, 0x1B, 0x00000000,
                PPC_NONE, PPC2_PRCNTL),
+GEN_HANDLER2_E(msgsndp, "msgsndp", 0x1F, 0x0E, 0x04, 0x03ff0001,
+               PPC_NONE, PPC2_ISA207S),
+GEN_HANDLER2_E(msgclrp, "msgclrp", 0x1F, 0x0E, 0x05, 0x03ff0001,
+               PPC_NONE, PPC2_ISA207S),
 GEN_HANDLER(wrtee, 0x1F, 0x03, 0x04, 0x000FFC01, PPC_WRTEE),
 GEN_HANDLER(wrteei, 0x1F, 0x03, 0x05, 0x000E7C01, PPC_WRTEE),
 GEN_HANDLER(dlmzb, 0x1F, 0x0E, 0x02, 0x00000000, PPC_440_SPEC),
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 8e287066e5..46f9399097 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -454,6 +454,19 @@ static void spr_write_pcr(DisasContext *ctx, int sprn, int gprn)
 {
     gen_helper_store_pcr(cpu_env, cpu_gpr[gprn]);
 }
+
+/* DPDES */
+static void spr_read_dpdes(DisasContext *ctx, int gprn, int sprn)
+{
+    gen_hfscr_facility_check(ctx, SPR_HFSCR, HFSCR_MSGSNDP, sprn,
+                             HFSCR_IC_MSGSNDP);
+    gen_helper_load_dpdes(cpu_gpr[gprn], cpu_env);
+}
+
+static void spr_write_dpdes(DisasContext *ctx, int sprn, int gprn)
+{
+    gen_helper_store_dpdes(cpu_env, cpu_gpr[gprn]);
+}
 #endif
 #endif
 
@@ -7478,6 +7491,20 @@ POWERPC_FAMILY(e600)(ObjectClass *oc, void *data)
 #define POWERPC970_HID5_INIT 0x00000000
 #endif
 
+void gen_hfscr_facility_check(DisasContext *ctx, int facility_sprn, int bit,
+                              int sprn, int cause)
+{
+    TCGv_i32 t1 = tcg_const_i32(bit);
+    TCGv_i32 t2 = tcg_const_i32(sprn);
+    TCGv_i32 t3 = tcg_const_i32(cause);
+
+    gen_helper_hfscr_facility_check(cpu_env, t1, t2, t3);
+
+    tcg_temp_free_i32(t3);
+    tcg_temp_free_i32(t2);
+    tcg_temp_free_i32(t1);
+}
+
 static void gen_fscr_facility_check(DisasContext *ctx, int facility_sprn,
                                     int bit, int sprn, int cause)
 {
@@ -8249,6 +8276,17 @@ static void gen_spr_power8_rpr(CPUPPCState *env)
 #endif
 }
 
+static void gen_spr_power8_dpdes(CPUPPCState *env)
+{
+#if !defined(CONFIG_USER_ONLY)
+    spr_register_kvm_hv(env, SPR_DPDES, "DPDES",
+                        SPR_NOACCESS, SPR_NOACCESS,
+                        &spr_read_dpdes, SPR_NOACCESS,
+                        &spr_read_dpdes, &spr_write_dpdes,
+                        KVM_REG_PPC_DPDES, 0x0UL);
+#endif
+}
+
 static void gen_spr_power9_mmu(CPUPPCState *env)
 {
 #if !defined(CONFIG_USER_ONLY)
@@ -8637,6 +8675,7 @@ static void init_proc_POWER8(CPUPPCState *env)
     gen_spr_power8_ic(env);
     gen_spr_power8_book4(env);
     gen_spr_power8_rpr(env);
+    gen_spr_power8_dpdes(env);
 
     /* env variables */
     env->dcache_line_size = 128;
@@ -8826,6 +8865,7 @@ static void init_proc_POWER9(CPUPPCState *env)
     gen_spr_power8_ic(env);
     gen_spr_power8_book4(env);
     gen_spr_power8_rpr(env);
+    gen_spr_power8_dpdes(env);
     gen_spr_power9_mmu(env);
 
     /* POWER9 Specific registers */
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 06/13] target/ppc: Enforce that the root page directory size must be at least 5
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

According to the ISA, the root page directory size of a radix tree for
either process or partition scoped translation must be >= 5.

Thus add this to the list of conditions checked when validating the
partition table entry in validate_pate().
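
As a rough illustration (the helper below is hypothetical and not part
of this patch; it merely restates what the size field encodes), a root
page directory size of n means the directory holds 2**n eight-byte
entries, so the ISA minimum of 5 corresponds to a 32-entry (256 byte)
root directory:

    /* Hypothetical helper, for illustration only */
    static uint64_t radix_root_dir_bytes(uint64_t size_field)
    {
        /* a size field value of n maps 2**n entries of 8 bytes each */
        return (1ULL << size_field) * sizeof(uint64_t);
    }

Anything smaller is rejected as an unsupported radix tree configuration,
matching the existing nls < 5 check made while walking the tree.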

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 target/ppc/mmu-radix64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index a6ab290323..afa5ba506a 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -249,6 +249,8 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     if (lpid == 0 && !msr_hv) {
         return false;
     }
+    if ((pate->dw0 & PATE1_R_PRTS) < 5)
+        return false;
     /* More checks ... */
     return true;
 }
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 07/13] target/ppc: Handle partition scoped radix tree translation
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

Radix tree translation is a two-step process:

Process Scoped Translation:
Effective Address (EA) -> Virtual Address (VA)

Partition Scoped Translation:
Virtual Address (VA) -> Real Address (RA)

Performed based on:
                                      MSR[HV]
           -----------------------------------------------
           |             |     HV = 0    |     HV = 1    |
           -----------------------------------------------
           | Relocation  |   Partition   |      No       |
           | = Off       |    Scoped     |  Translation  |
Relocation -----------------------------------------------
           | Relocation  |  Partition &  |    Process    |
           | = On        |Process Scoped |    Scoped     |
           -----------------------------------------------

Currently only process scoped translation is handled.
Implement partition scoped translation.

The procedure for walking the radix trees for partition scoped
translation is identical to that for process scoped translation, except
that hypervisor exceptions (HDSI/HISI) are generated on faults, so the
radix tree traversal code can be reused.
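
For reference, a minimal sketch of the dispatch the table above implies.
The conditions mirror those in the diff below (ppc_radix64_handle_mmu_fault()
and ppc_radix64_xlate()); the do_* locals and the empty branches are purely
illustrative:

    /* Sketch only -- see ppc_radix64_xlate() below for the real code */
    bool do_process   = relocation;   /* MSR[IR]/MSR[DR] relocation is on */
    bool do_partition = (lpid != 0) || (!cpu->vhyp && !msr_hv);

    if (do_process) {
        /* Step 1: guest effective address -> guest real address */
    }
    if (do_partition) {
        /* Step 2: guest real address -> host real address; faults here
         * raise hypervisor storage interrupts (HDSI/HISI) */
    }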

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 target/ppc/cpu.h         |   2 +
 target/ppc/excp_helper.c |   3 +-
 target/ppc/mmu-radix64.c | 407 +++++++++++++++++++++++++++++++++--------------
 3 files changed, 293 insertions(+), 119 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 1d2a088391..3acc248f40 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -501,6 +501,8 @@ typedef struct ppc_v3_pate_t {
 /* Unsupported Radix Tree Configuration */
 #define DSISR_R_BADCONFIG        0x00080000
 #define DSISR_ATOMIC_RC          0x00040000
+/* Unable to translate address of (guest) pde or process/page table entry */
+#define DSISR_PRTABLE_FAULT      0x00020000
 
 /* SRR1 error code fields */
 
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 7a4da7bdba..10091d4624 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -441,9 +441,10 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
     case POWERPC_EXCP_ISEG:      /* Instruction segment exception            */
     case POWERPC_EXCP_TRACE:     /* Trace exception                          */
         break;
+    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
+        msr |= env->error_code;
     case POWERPC_EXCP_HDECR:     /* Hypervisor decrementer exception         */
     case POWERPC_EXCP_HDSI:      /* Hypervisor data storage exception        */
-    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
     case POWERPC_EXCP_HDSEG:     /* Hypervisor data segment exception        */
     case POWERPC_EXCP_HISEG:     /* Hypervisor instruction segment exception */
     case POWERPC_EXCP_SDOOR_HV:  /* Hypervisor Doorbell interrupt            */
diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index afa5ba506a..6118ad1b00 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -112,9 +112,31 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
     }
 }
 
+static void ppc_radix64_raise_hsi(PowerPCCPU *cpu, int rwx, vaddr eaddr,
+                                  hwaddr g_raddr, uint32_t cause)
+{
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
+
+    if (rwx == 2) { /* H Instruction Storage Interrupt */
+        cs->exception_index = POWERPC_EXCP_HISI;
+        env->spr[SPR_ASDR] = g_raddr;
+        env->error_code = cause;
+    } else { /* H Data Storage Interrupt */
+        cs->exception_index = POWERPC_EXCP_HDSI;
+        if (rwx == 1) { /* Write -> Store */
+            cause |= DSISR_ISSTORE;
+        }
+        env->spr[SPR_HDSISR] = cause;
+        env->spr[SPR_HDAR] = eaddr;
+        env->spr[SPR_ASDR] = g_raddr;
+        env->error_code = 0;
+    }
+}
 
 static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
-                                   int *fault_cause, int *prot)
+                                   int *fault_cause, int *prot,
+                                   bool partition_scoped)
 {
     CPUPPCState *env = &cpu->env;
     const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
@@ -130,11 +152,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
     }
 
     /* Determine permissions allowed by Encoded Access Authority */
-    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
+    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
         *prot = 0;
-    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
+    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
         *prot = ppc_radix64_get_prot_eaa(pte);
-    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
+    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
         *prot = ppc_radix64_get_prot_eaa(pte);
         *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
     }
@@ -199,44 +221,196 @@ static uint64_t ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte, hwadd
     return npte;
 }
 
-static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
-                                      uint64_t base_addr, uint64_t nls,
-                                      hwaddr *raddr, int *psize,
-                                      int *fault_cause, hwaddr *pte_addr)
+static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
+                                       uint64_t *pte_addr, uint64_t *nls,
+                                       int *psize, int *fault_cause)
 {
     CPUState *cs = CPU(cpu);
     uint64_t index, pde;
 
-    if (nls < 5) { /* Directory maps less than 2**5 entries */
+    if (*nls < 5) { /* Directory maps less than 2**5 entries */
         *fault_cause |= DSISR_R_BADCONFIG;
         return 0;
     }
 
     /* Read page <directory/table> entry from guest address space */
-    index = eaddr >> (*psize - nls); /* Shift */
-    index &= ((1UL << nls) - 1); /* Mask */
-    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
-    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
+    pde = ldq_phys(cs->as, *pte_addr);
+    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
         *fault_cause |= DSISR_NOPTE;
         return 0;
     }
 
-    *psize -= nls;
+    *psize -= *nls;
+    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
+        *nls = pde & R_PDE_NLS;
+        index = eaddr >> (*psize - *nls);       /* Shift */
+        index &= ((1UL << *nls) - 1);           /* Mask */
+        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
+    }
+    return pde;
+}
+
+static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
+                                      uint64_t base_addr, uint64_t nls,
+                                      hwaddr *raddr, int *psize,
+                                      int *fault_cause, hwaddr *pte_addr)
+{
+    uint64_t index, pde;
+
+    index = eaddr >> (*psize - nls);    /* Shift */
+    index &= ((1UL << nls) - 1);       /* Mask */
+    *pte_addr = base_addr + (index * sizeof(pde));
+    do {
+        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
+                                     fault_cause);
+    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
 
-    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
-    if (pde & R_PTE_LEAF) {
+    /* Did we find a valid leaf? */
+    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
         uint64_t rpn = pde & R_PTE_RPN;
         uint64_t mask = (1UL << *psize) - 1;
 
         /* Or high bits of rpn and low bits to ea to form whole real addr */
         *raddr = (rpn & ~mask) | (eaddr & mask);
-        *pte_addr = base_addr + (index * sizeof(pde));
-        return pde;
     }
 
-    /* Next Level of Radix Tree */
-    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
-                                 raddr, psize, fault_cause, pte_addr);
+    return pde;
+}
+
+static int ppc_radix64_partition_scoped_xlate(PowerPCCPU *cpu, int rwx,
+                                              vaddr eaddr, hwaddr g_raddr,
+                                              ppc_v3_pate_t pate,
+                                              hwaddr *h_raddr, int *h_prot,
+                                              int *h_page_size, bool pde_addr,
+                                              bool cause_excp)
+{
+    CPUPPCState *env = &cpu->env;
+    int fault_cause = 0;
+    hwaddr pte_addr;
+    uint64_t pte;
+
+restart:
+    *h_page_size = PRTBE_R_GET_RTS(pate.dw0);
+    pte = ppc_radix64_walk_tree(cpu, g_raddr, pate.dw0 & PRTBE_R_RPDB,
+                                pate.dw0 & PRTBE_R_RPDS, h_raddr, h_page_size,
+                                &fault_cause, &pte_addr);
+    /* No valid pte or access denied due to protection */
+    if (!(pte & R_PTE_VALID) ||
+            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, h_prot, 1)) {
+        if (pde_addr) /* address being translated was that of a guest pde */
+            fault_cause |= DSISR_PRTABLE_FAULT;
+        if (cause_excp)
+            ppc_radix64_raise_hsi(cpu, rwx, eaddr, g_raddr, fault_cause);
+        return 1;
+    }
+
+    /* Update Reference and Change Bits */
+    if (ppc_radix64_hw_rc_updates(env)) {
+        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
+        if (!pte) {
+            goto restart;
+        }
+    }
+
+    /* If the page doesn't have C, treat it as read only */
+    if (!(pte & R_PTE_C))
+        *h_prot &= ~PAGE_WRITE;
+
+    return 0;
+}
+
+static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
+                                            vaddr eaddr, uint64_t lpid, uint64_t pid,
+                                            ppc_v3_pate_t pate, hwaddr *g_raddr,
+                                            int *g_prot, int *g_page_size,
+                                            bool cause_excp)
+{
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
+    uint64_t offset, size, prtbe_addr, prtbe0, base_addr, nls, index, pte;
+    int fault_cause = 0, h_page_size, h_prot, ret;
+    hwaddr h_raddr, pte_addr;
+
+    /* Index Process Table by PID to Find Corresponding Process Table Entry */
+    offset = pid * sizeof(struct prtb_entry);
+    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
+    if (offset >= size) {
+        /* offset exceeds size of the process table */
+        if (cause_excp)
+            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+        return 1;
+    }
+    prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
+    /* address subject to partition scoped translation */
+    if (cpu->vhyp && (lpid == 0)) {
+        prtbe0 = ldq_phys(cs->as, prtbe_addr);
+    } else {
+        ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, prtbe_addr,
+                                                 pate, &h_raddr, &h_prot,
+                                                 &h_page_size, 1, 1);
+        if (ret)
+            return ret;
+        prtbe0 = ldq_phys(cs->as, h_raddr);
+    }
+
+    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
+restart:
+    *g_page_size = PRTBE_R_GET_RTS(prtbe0);
+    base_addr = prtbe0 & PRTBE_R_RPDB;
+    nls = prtbe0 & PRTBE_R_RPDS;
+    if (msr_hv || (cpu->vhyp && (lpid == 0))) {
+        /* Can treat process tree addresses as real addresses */
+        pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK, base_addr, nls,
+                                    g_raddr, g_page_size, &fault_cause,
+                                    &pte_addr);
+    } else {
+        index = (eaddr & R_EADDR_MASK) >> (*g_page_size - nls); /* Shift */
+        index &= ((1UL << nls) - 1);                            /* Mask */
+        pte_addr = base_addr + (index * sizeof(pte));
+
+        /* Each process tree address subject to partition scoped translation */
+        do {
+            ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, pte_addr,
+                                                     pate, &h_raddr, &h_prot,
+                                                     &h_page_size, 1, 1);
+            if (ret)
+                return ret;
+
+            pte = ppc_radix64_next_level(cpu, eaddr & R_EADDR_MASK, &h_raddr,
+                                         &nls, g_page_size, &fault_cause);
+            pte_addr = h_raddr;
+        } while ((pte & R_PTE_VALID) && !(pte & R_PTE_LEAF));
+
+        /* Did we find a valid leaf? */
+        if ((pte & R_PTE_VALID) && (pte & R_PTE_LEAF)) {
+            uint64_t rpn = pte & R_PTE_RPN;
+            uint64_t mask = (1UL << *g_page_size) - 1;
+
+            /* Or high bits of rpn and low bits to ea to form whole real addr */
+            *g_raddr = (rpn & ~mask) | (eaddr & mask);
+        }
+    }
+
+    if (!(pte & R_PTE_VALID) ||
+            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {
+        /* No valid pte or access denied due to protection */
+        if (cause_excp)
+            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
+        return 1;
+    }
+
+    /* Update Reference and Change Bits */
+    if (ppc_radix64_hw_rc_updates(env)) {
+        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
+        if (!pte)
+            goto restart;
+    }
+
+    /* If the page doesn't have C, treat it as read only */
+    if (!(pte & R_PTE_C))
+        *g_prot &= ~PAGE_WRITE;
+
+    return 0;
 }
 
 static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
@@ -255,22 +429,99 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     return true;
 }
 
+static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
+                             uint64_t lpid, uint64_t pid, bool relocation,
+                             hwaddr *raddr, int *psizep, int *protp,
+                             bool cause_excp)
+{
+    CPUPPCState *env = &cpu->env;
+    ppc_v3_pate_t pate;
+    int psize, prot;
+    hwaddr g_raddr;
+
+    *psizep = INT_MAX;
+    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+
+    /* Get Process Table */
+    if (cpu->vhyp && (lpid == 0)) {
+        PPCVirtualHypervisorClass *vhc;
+        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
+        vhc->get_pate(cpu->vhyp, &pate);
+    } else {
+        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
+            if (cause_excp)
+                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+            return 1;
+        }
+        if (!validate_pate(cpu, lpid, &pate)) {
+            if (cause_excp)
+                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
+            return 1;
+        }
+    }
+
+    /*
+     * Radix tree translation is a 2 step translation process:
+     * 1. Process Scoped translation - Guest Eff Addr -> Guest Real Addr
+     * 2. Partition Scoped translation - Guest Real Addr -> Host Real Addr
+     *
+     *                                       MSR[HV]
+     *            -----------------------------------------------
+     *            |             |     HV = 0    |     HV = 1    |
+     *            -----------------------------------------------
+     *            | Relocation  |   Partition   |      No       |
+     *            | = Off       |    Scoped     |  Translation  |
+     * Relocation -----------------------------------------------
+     *            | Relocation  |  Partition &  |    Process    |
+     *            | = On        |Process Scoped |    Scoped     |
+     *            -----------------------------------------------
+     */
+
+    /* Perform process scoped translation if relocation enabled */
+    if (relocation) {
+        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
+                                                   pate, &g_raddr, &prot,
+                                                   &psize, cause_excp);
+        if (ret)
+            return ret;
+        *psizep = MIN(*psizep, psize);
+        *protp &= prot;
+    } else {
+        g_raddr = eaddr & R_EADDR_MASK;
+    }
+
+    /* Perform partition scoped xlate if !HV or HV access to quadrants 1 or 2 */
+    if ((lpid != 0) || (!cpu->vhyp && !msr_hv)) {
+        int ret = ppc_radix64_partition_scoped_xlate(cpu, rwx, eaddr, g_raddr,
+                                                     pate, raddr, &prot, &psize,
+                                                     0, cause_excp);
+        if (ret)
+            return ret;
+        *psizep = MIN(*psizep, psize);
+        *protp &= prot;
+    } else {
+        *raddr = g_raddr;
+    }
+
+    return 0;
+}
+
 int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
                                  int mmu_idx)
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    PPCVirtualHypervisorClass *vhc;
-    hwaddr raddr, pte_addr;
-    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
-    int page_size, prot, fault_cause = 0;
-    ppc_v3_pate_t pate;
+    uint64_t pid, lpid = env->spr[SPR_LPIDR];
+    int psize, prot;
+    bool relocation;
+    hwaddr raddr;
 
+    assert(!(msr_hv && cpu->vhyp));
     assert((rwx == 0) || (rwx == 1) || (rwx == 2));
 
+    relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
     /* HV or virtual hypervisor Real Mode Access */
-    if ((msr_hv || cpu->vhyp) &&
-        (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0)))) {
+    if (!relocation && (msr_hv || (cpu->vhyp && (lpid == 0)))) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
@@ -294,75 +545,26 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
         return 1;
     }
 
-    /* Get Process Table */
-    if (cpu->vhyp) {
-        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
-        vhc->get_pate(cpu->vhyp, &pate);
-    } else {
-        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
-            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
-            return 1;
-        }
-        if (!validate_pate(cpu, lpid, &pate)) {
-            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
-        }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-       }
-    }
-
-    /* Index Process Table by PID to Find Corresponding Process Table Entry */
-    offset = pid * sizeof(struct prtb_entry);
-    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
-    if (offset >= size) {
-        /* offset exceeds size of the process table */
-        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
-        return 1;
-    }
-    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
-
-    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
-    page_size = PRTBE_R_GET_RTS(prtbe0);
- restart:
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                &raddr, &page_size, &fault_cause, &pte_addr);
-    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
-        /* Couldn't get pte or access denied due to protection */
-        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
+    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
+    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
+                          &psize, &prot, 1)) {
         return 1;
     }
 
-    /* Update Reference and Change Bits */
-    if (ppc_radix64_hw_rc_updates(env)) {
-        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
-        if (!pte) {
-            goto restart;
-        }
-    }
-    /* If the page doesn't have C, treat it as read only */
-    if (!(pte & R_PTE_C)) {
-        prot &= ~PAGE_WRITE;
-    }
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
-                 prot, mmu_idx, 1UL << page_size);
+                 prot, mmu_idx, 1UL << psize);
     return 0;
 }
 
 hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
 {
-    CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    PPCVirtualHypervisorClass *vhc;
-    hwaddr raddr, pte_addr;
-    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
-    int page_size, fault_cause = 0;
-    ppc_v3_pate_t pate;
+    uint64_t lpid = 0, pid = 0;
+    int psize, prot;
+    hwaddr raddr;
 
     /* Handle Real Mode */
-    if (msr_dr == 0) {
+    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && (lpid == 0)))) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         return eaddr & 0x0FFFFFFFFFFFFFFFULL;
     }
@@ -372,39 +574,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
         return -1;
     }
 
-    /* Get Process Table */
-    if (cpu->vhyp) {
-        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
-        vhc->get_pate(cpu->vhyp, &pate);
-    } else {
-        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
-            return -1;
-        }
-        if (!validate_pate(cpu, lpid, &pate)) {
-            return -1;
-        }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-       }
-    }
-
-    /* Index Process Table by PID to Find Corresponding Process Table Entry */
-    offset = pid * sizeof(struct prtb_entry);
-    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
-    if (offset >= size) {
-        /* offset exceeds size of the process table */
-        return -1;
-    }
-    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
-
-    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
-    page_size = PRTBE_R_GET_RTS(prtbe0);
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                &raddr, &page_size, &fault_cause, &pte_addr);
-    if (!pte) {
+    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
+                          &prot, 0)) {
         return -1;
     }
 
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 07/13] target/ppc: Handle partition scoped radix tree translation
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: groug, qemu-ppc, clg, Suraj Jitindar Singh, david

Radix tree translation is a two-step process:

Process Scoped Translation:
Effective Address (EA) -> Virtual Address (VA)

Partition Scoped Translation:
Virtual Address (VA) -> Real Address (RA)

Performed based on:
                                      MSR[HV]
           -----------------------------------------------
           |             |     HV = 0    |     HV = 1    |
           -----------------------------------------------
           | Relocation  |   Partition   |      No       |
           | = Off       |    Scoped     |  Translation  |
Relocation -----------------------------------------------
           | Relocation  |  Partition &  |    Process    |
           | = On        |Process Scoped |    Scoped     |
           -----------------------------------------------

Currently only process scoped translation is handled.
Implement partition scoped translation.

The procedure for walking the radix trees for partition scoped
translation is identical to that for process scoped translation, except
that hypervisor exceptions (HDSI/HISI) are generated on faults, so the
radix tree traversal code can be reused.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 target/ppc/cpu.h         |   2 +
 target/ppc/excp_helper.c |   3 +-
 target/ppc/mmu-radix64.c | 407 +++++++++++++++++++++++++++++++++--------------
 3 files changed, 293 insertions(+), 119 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 1d2a088391..3acc248f40 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -501,6 +501,8 @@ typedef struct ppc_v3_pate_t {
 /* Unsupported Radix Tree Configuration */
 #define DSISR_R_BADCONFIG        0x00080000
 #define DSISR_ATOMIC_RC          0x00040000
+/* Unable to translate address of (guest) pde or process/page table entry */
+#define DSISR_PRTABLE_FAULT      0x00020000
 
 /* SRR1 error code fields */
 
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 7a4da7bdba..10091d4624 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -441,9 +441,10 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
     case POWERPC_EXCP_ISEG:      /* Instruction segment exception            */
     case POWERPC_EXCP_TRACE:     /* Trace exception                          */
         break;
+    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
+        msr |= env->error_code;
     case POWERPC_EXCP_HDECR:     /* Hypervisor decrementer exception         */
     case POWERPC_EXCP_HDSI:      /* Hypervisor data storage exception        */
-    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
     case POWERPC_EXCP_HDSEG:     /* Hypervisor data segment exception        */
     case POWERPC_EXCP_HISEG:     /* Hypervisor instruction segment exception */
     case POWERPC_EXCP_SDOOR_HV:  /* Hypervisor Doorbell interrupt            */
diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index afa5ba506a..6118ad1b00 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -112,9 +112,31 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
     }
 }
 
+static void ppc_radix64_raise_hsi(PowerPCCPU *cpu, int rwx, vaddr eaddr,
+                                  hwaddr g_raddr, uint32_t cause)
+{
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
+
+    if (rwx == 2) { /* H Instruction Storage Interrupt */
+        cs->exception_index = POWERPC_EXCP_HISI;
+        env->spr[SPR_ASDR] = g_raddr;
+        env->error_code = cause;
+    } else { /* H Data Storage Interrupt */
+        cs->exception_index = POWERPC_EXCP_HDSI;
+        if (rwx == 1) { /* Write -> Store */
+            cause |= DSISR_ISSTORE;
+        }
+        env->spr[SPR_HDSISR] = cause;
+        env->spr[SPR_HDAR] = eaddr;
+        env->spr[SPR_ASDR] = g_raddr;
+        env->error_code = 0;
+    }
+}
 
 static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
-                                   int *fault_cause, int *prot)
+                                   int *fault_cause, int *prot,
+                                   bool partition_scoped)
 {
     CPUPPCState *env = &cpu->env;
     const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
@@ -130,11 +152,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
     }
 
     /* Determine permissions allowed by Encoded Access Authority */
-    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
+    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
         *prot = 0;
-    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
+    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
         *prot = ppc_radix64_get_prot_eaa(pte);
-    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
+    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
         *prot = ppc_radix64_get_prot_eaa(pte);
         *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
     }
@@ -199,44 +221,196 @@ static uint64_t ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte, hwadd
     return npte;
 }
 
-static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
-                                      uint64_t base_addr, uint64_t nls,
-                                      hwaddr *raddr, int *psize,
-                                      int *fault_cause, hwaddr *pte_addr)
+static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
+                                       uint64_t *pte_addr, uint64_t *nls,
+                                       int *psize, int *fault_cause)
 {
     CPUState *cs = CPU(cpu);
     uint64_t index, pde;
 
-    if (nls < 5) { /* Directory maps less than 2**5 entries */
+    if (*nls < 5) { /* Directory maps less than 2**5 entries */
         *fault_cause |= DSISR_R_BADCONFIG;
         return 0;
     }
 
     /* Read page <directory/table> entry from guest address space */
-    index = eaddr >> (*psize - nls); /* Shift */
-    index &= ((1UL << nls) - 1); /* Mask */
-    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
-    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
+    pde = ldq_phys(cs->as, *pte_addr);
+    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
         *fault_cause |= DSISR_NOPTE;
         return 0;
     }
 
-    *psize -= nls;
+    *psize -= *nls;
+    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
+        *nls = pde & R_PDE_NLS;
+        index = eaddr >> (*psize - *nls);       /* Shift */
+        index &= ((1UL << *nls) - 1);           /* Mask */
+        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
+    }
+    return pde;
+}
+
+static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
+                                      uint64_t base_addr, uint64_t nls,
+                                      hwaddr *raddr, int *psize,
+                                      int *fault_cause, hwaddr *pte_addr)
+{
+    uint64_t index, pde;
+
+    index = eaddr >> (*psize - nls);    /* Shift */
+    index &= ((1UL << nls) - 1);       /* Mask */
+    *pte_addr = base_addr + (index * sizeof(pde));
+    do {
+        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
+                                     fault_cause);
+    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
 
-    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
-    if (pde & R_PTE_LEAF) {
+    /* Did we find a valid leaf? */
+    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
         uint64_t rpn = pde & R_PTE_RPN;
         uint64_t mask = (1UL << *psize) - 1;
 
         /* Or high bits of rpn and low bits to ea to form whole real addr */
         *raddr = (rpn & ~mask) | (eaddr & mask);
-        *pte_addr = base_addr + (index * sizeof(pde));
-        return pde;
     }
 
-    /* Next Level of Radix Tree */
-    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
-                                 raddr, psize, fault_cause, pte_addr);
+    return pde;
+}
+
+static int ppc_radix64_partition_scoped_xlate(PowerPCCPU *cpu, int rwx,
+                                              vaddr eaddr, hwaddr g_raddr,
+                                              ppc_v3_pate_t pate,
+                                              hwaddr *h_raddr, int *h_prot,
+                                              int *h_page_size, bool pde_addr,
+                                              bool cause_excp)
+{
+    CPUPPCState *env = &cpu->env;
+    int fault_cause = 0;
+    hwaddr pte_addr;
+    uint64_t pte;
+
+restart:
+    *h_page_size = PRTBE_R_GET_RTS(pate.dw0);
+    pte = ppc_radix64_walk_tree(cpu, g_raddr, pate.dw0 & PRTBE_R_RPDB,
+                                pate.dw0 & PRTBE_R_RPDS, h_raddr, h_page_size,
+                                &fault_cause, &pte_addr);
+    /* No valid pte or access denied due to protection */
+    if (!(pte & R_PTE_VALID) ||
+            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, h_prot, 1)) {
+        if (pde_addr) /* address being translated was that of a guest pde */
+            fault_cause |= DSISR_PRTABLE_FAULT;
+        if (cause_excp)
+            ppc_radix64_raise_hsi(cpu, rwx, eaddr, g_raddr, fault_cause);
+        return 1;
+    }
+
+    /* Update Reference and Change Bits */
+    if (ppc_radix64_hw_rc_updates(env)) {
+        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
+        if (!pte) {
+            goto restart;
+        }
+    }
+
+    /* If the page doesn't have C, treat it as read only */
+    if (!(pte & R_PTE_C))
+        *h_prot &= ~PAGE_WRITE;
+
+    return 0;
+}
+
+static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
+                                            vaddr eaddr, uint64_t lpid, uint64_t pid,
+                                            ppc_v3_pate_t pate, hwaddr *g_raddr,
+                                            int *g_prot, int *g_page_size,
+                                            bool cause_excp)
+{
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
+    uint64_t offset, size, prtbe_addr, prtbe0, base_addr, nls, index, pte;
+    int fault_cause = 0, h_page_size, h_prot, ret;
+    hwaddr h_raddr, pte_addr;
+
+    /* Index Process Table by PID to Find Corresponding Process Table Entry */
+    offset = pid * sizeof(struct prtb_entry);
+    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
+    if (offset >= size) {
+        /* offset exceeds size of the process table */
+        if (cause_excp)
+            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+        return 1;
+    }
+    prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
+    /* address subject to partition scoped translation */
+    if (cpu->vhyp && (lpid == 0)) {
+        prtbe0 = ldq_phys(cs->as, prtbe_addr);
+    } else {
+        ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, prtbe_addr,
+                                                 pate, &h_raddr, &h_prot,
+                                                 &h_page_size, 1, 1);
+        if (ret)
+            return ret;
+        prtbe0 = ldq_phys(cs->as, h_raddr);
+    }
+
+    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
+restart:
+    *g_page_size = PRTBE_R_GET_RTS(prtbe0);
+    base_addr = prtbe0 & PRTBE_R_RPDB;
+    nls = prtbe0 & PRTBE_R_RPDS;
+    if (msr_hv || (cpu->vhyp && (lpid == 0))) {
+        /* Can treat process tree addresses as real addresses */
+        pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK, base_addr, nls,
+                                    g_raddr, g_page_size, &fault_cause,
+                                    &pte_addr);
+    } else {
+        index = (eaddr & R_EADDR_MASK) >> (*g_page_size - nls); /* Shift */
+        index &= ((1UL << nls) - 1);                            /* Mask */
+        pte_addr = base_addr + (index * sizeof(pte));
+
+        /* Each process tree address subject to partition scoped translation */
+        do {
+            ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, pte_addr,
+                                                     pate, &h_raddr, &h_prot,
+                                                     &h_page_size, 1, 1);
+            if (ret)
+                return ret;
+
+            pte = ppc_radix64_next_level(cpu, eaddr & R_EADDR_MASK, &h_raddr,
+                                         &nls, g_page_size, &fault_cause);
+            pte_addr = h_raddr;
+        } while ((pte & R_PTE_VALID) && !(pte & R_PTE_LEAF));
+
+        /* Did we find a valid leaf? */
+        if ((pte & R_PTE_VALID) && (pte & R_PTE_LEAF)) {
+            uint64_t rpn = pte & R_PTE_RPN;
+            uint64_t mask = (1UL << *g_page_size) - 1;
+
+            /* Or high bits of rpn and low bits to ea to form whole real addr */
+            *g_raddr = (rpn & ~mask) | (eaddr & mask);
+        }
+    }
+
+    if (!(pte & R_PTE_VALID) ||
+            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {
+        /* No valid pte or access denied due to protection */
+        if (cause_excp)
+            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
+        return 1;
+    }
+
+    /* Update Reference and Change Bits */
+    if (ppc_radix64_hw_rc_updates(env)) {
+        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
+        if (!pte)
+            goto restart;
+    }
+
+    /* If the page doesn't have C, treat it as read only */
+    if (!(pte & R_PTE_C))
+        *g_prot &= ~PAGE_WRITE;
+
+    return 0;
 }
 
 static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
@@ -255,22 +429,99 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     return true;
 }
 
+static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
+                             uint64_t lpid, uint64_t pid, bool relocation,
+                             hwaddr *raddr, int *psizep, int *protp,
+                             bool cause_excp)
+{
+    CPUPPCState *env = &cpu->env;
+    ppc_v3_pate_t pate;
+    int psize, prot;
+    hwaddr g_raddr;
+
+    *psizep = INT_MAX;
+    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+
+    /* Get Process Table */
+    if (cpu->vhyp && (lpid == 0)) {
+        PPCVirtualHypervisorClass *vhc;
+        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
+        vhc->get_pate(cpu->vhyp, &pate);
+    } else {
+        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
+            if (cause_excp)
+                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+            return 1;
+        }
+        if (!validate_pate(cpu, lpid, &pate)) {
+            if (cause_excp)
+                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
+            return 1;
+        }
+    }
+
+    /*
+     * Radix tree translation is a 2 step translation process:
+     * 1. Process Scoped translation - Guest Eff Addr -> Guest Real Addr
+     * 2. Partition Scoped translation - Guest Real Addr -> Host Real Addr
+     *
+     *                                       MSR[HV]
+     *            -----------------------------------------------
+     *            |             |     HV = 0    |     HV = 1    |
+     *            -----------------------------------------------
+     *            | Relocation  |   Partition   |      No       |
+     *            | = Off       |    Scoped     |  Translation  |
+     * Relocation -----------------------------------------------
+     *            | Relocation  |  Partition &  |    Process    |
+     *            | = On        |Process Scoped |    Scoped     |
+     *            -----------------------------------------------
+     */
+
+    /* Perform process scoped translation if relocation enabled */
+    if (relocation) {
+        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
+                                                   pate, &g_raddr, &prot,
+                                                   &psize, cause_excp);
+        if (ret)
+            return ret;
+        *psizep = MIN(*psizep, psize);
+        *protp &= prot;
+    } else {
+        g_raddr = eaddr & R_EADDR_MASK;
+    }
+
+    /* Perform partition scoped xlate if !HV or HV access to quadrants 1 or 2 */
+    if ((lpid != 0) || (!cpu->vhyp && !msr_hv)) {
+        int ret = ppc_radix64_partition_scoped_xlate(cpu, rwx, eaddr, g_raddr,
+                                                     pate, raddr, &prot, &psize,
+                                                     0, cause_excp);
+        if (ret)
+            return ret;
+        *psizep = MIN(*psizep, psize);
+        *protp &= prot;
+    } else {
+        *raddr = g_raddr;
+    }
+
+    return 0;
+}
+
 int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
                                  int mmu_idx)
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    PPCVirtualHypervisorClass *vhc;
-    hwaddr raddr, pte_addr;
-    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
-    int page_size, prot, fault_cause = 0;
-    ppc_v3_pate_t pate;
+    uint64_t pid, lpid = env->spr[SPR_LPIDR];
+    int psize, prot;
+    bool relocation;
+    hwaddr raddr;
 
+    assert(!(msr_hv && cpu->vhyp));
     assert((rwx == 0) || (rwx == 1) || (rwx == 2));
 
+    relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
     /* HV or virtual hypervisor Real Mode Access */
-    if ((msr_hv || cpu->vhyp) &&
-        (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0)))) {
+    if (!relocation && (msr_hv || (cpu->vhyp && (lpid == 0)))) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
@@ -294,75 +545,26 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
         return 1;
     }
 
-    /* Get Process Table */
-    if (cpu->vhyp) {
-        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
-        vhc->get_pate(cpu->vhyp, &pate);
-    } else {
-        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
-            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
-            return 1;
-        }
-        if (!validate_pate(cpu, lpid, &pate)) {
-            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
-        }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-       }
-    }
-
-    /* Index Process Table by PID to Find Corresponding Process Table Entry */
-    offset = pid * sizeof(struct prtb_entry);
-    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
-    if (offset >= size) {
-        /* offset exceeds size of the process table */
-        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
-        return 1;
-    }
-    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
-
-    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
-    page_size = PRTBE_R_GET_RTS(prtbe0);
- restart:
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                &raddr, &page_size, &fault_cause, &pte_addr);
-    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
-        /* Couldn't get pte or access denied due to protection */
-        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
+    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
+    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
+                          &psize, &prot, 1)) {
         return 1;
     }
 
-    /* Update Reference and Change Bits */
-    if (ppc_radix64_hw_rc_updates(env)) {
-        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
-        if (!pte) {
-            goto restart;
-        }
-    }
-    /* If the page doesn't have C, treat it as read only */
-    if (!(pte & R_PTE_C)) {
-        prot &= ~PAGE_WRITE;
-    }
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
-                 prot, mmu_idx, 1UL << page_size);
+                 prot, mmu_idx, 1UL << psize);
     return 0;
 }
 
 hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
 {
-    CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    PPCVirtualHypervisorClass *vhc;
-    hwaddr raddr, pte_addr;
-    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
-    int page_size, fault_cause = 0;
-    ppc_v3_pate_t pate;
+    uint64_t lpid = 0, pid = 0;
+    int psize, prot;
+    hwaddr raddr;
 
     /* Handle Real Mode */
-    if (msr_dr == 0) {
+    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && (lpid == 0)))) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         return eaddr & 0x0FFFFFFFFFFFFFFFULL;
     }
@@ -372,39 +574,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
         return -1;
     }
 
-    /* Get Process Table */
-    if (cpu->vhyp) {
-        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
-        vhc->get_pate(cpu->vhyp, &pate);
-    } else {
-        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
-            return -1;
-        }
-        if (!validate_pate(cpu, lpid, &pate)) {
-            return -1;
-        }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-       }
-    }
-
-    /* Index Process Table by PID to Find Corresponding Process Table Entry */
-    offset = pid * sizeof(struct prtb_entry);
-    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
-    if (offset >= size) {
-        /* offset exceeds size of the process table */
-        return -1;
-    }
-    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
-
-    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
-    page_size = PRTBE_R_GET_RTS(prtbe0);
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                &raddr, &page_size, &fault_cause, &pte_addr);
-    if (!pte) {
+    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
+                          &prot, 0)) {
         return -1;
     }
 
-- 
2.13.6



^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 08/13] target/ppc: Implement hcall H_SET_PARTITION_TABLE
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The hcall H_SET_PARTITION_TABLE is used by a guest acting as a nested
hypervisor to register the partition table entry for one of its guests
with the real hypervisor.

Implement this hcall for a spapr guest.
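
For illustration only (not part of the patch), a minimal sketch of the
caller's side, assuming the usual Linux <asm/hvcall.h> and
<asm/plpar_wrappers.h> declarations; the helper name and call site in
the L1 kernel are hypothetical:

    /*
     * Hypothetical L1 (nested hypervisor) code: register the partition
     * table base and size with the real hypervisor instead of writing
     * the hypervisor-privileged PTCR directly.
     */
    static long l1_register_partition_table(unsigned long ptcr)
    {
        /* ptcr: partition table base | encoded table size (PATS) */
        return plpar_hcall_norets(H_SET_PARTITION_TABLE, ptcr);
    }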

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/spapr_hcall.c   | 22 ++++++++++++++++++++++
 include/hw/ppc/spapr.h |  4 +++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 4d7fe337a1..704ceff8e1 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1828,6 +1828,25 @@ static target_ulong h_update_dt(PowerPCCPU *cpu, SpaprMachineState *spapr,
     return H_SUCCESS;
 }
 
+static target_ulong h_set_partition_table(PowerPCCPU *cpu,
+                                          SpaprMachineState *spapr,
+                                          target_ulong opcode,
+                                          target_ulong *args)
+{
+    CPUPPCState *env = &cpu->env;
+    target_ulong ptcr = args[0];
+
+    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
+        return H_FUNCTION;
+    }
+
+    if ((ptcr & PTCR_PATS) > 24)
+        return H_PARAMETER;
+
+    env->spr[SPR_PTCR] = ptcr;
+    return H_SUCCESS;
+}
+
 static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
 static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
 
@@ -1934,6 +1953,9 @@ static void hypercall_register_types(void)
 
     spapr_register_hypercall(KVMPPC_H_UPDATE_DT, h_update_dt);
 
+    /* Platform-specific hcalls used for nested HV KVM */
+    spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
+
     /* Virtual Processor Home Node */
     spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
                              h_home_node_associativity);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 4251215908..e591ee0ba0 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -501,7 +501,9 @@ struct SpaprMachineState {
 /* Client Architecture support */
 #define KVMPPC_H_CAS            (KVMPPC_HCALL_BASE + 0x2)
 #define KVMPPC_H_UPDATE_DT      (KVMPPC_HCALL_BASE + 0x3)
-#define KVMPPC_HCALL_MAX        KVMPPC_H_UPDATE_DT
+/* Platform-specific hcalls used for nested HV KVM */
+#define H_SET_PARTITION_TABLE   0xF800
+#define KVMPPC_HCALL_MAX        H_SET_PARTITION_TABLE
 
 typedef struct SpaprDeviceTreeUpdateHeader {
     uint32_t version_id;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 09/13] target/ppc: Implement hcall H_ENTER_NESTED
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The hcall H_ENTER_NESTED is used by a guest acting as a nested
hypervisor to provide the state of one of its guests which it would
like the real hypervisor to load onto the cpu and execute on its behalf.

The hcall takes as arguments two guest real addresses which give the
location of a regs struct and a hypervisor regs struct holding the
values with which to execute the guest. These are loaded into the cpu
state and then the function returns to continue tcg execution in the
new context. When an interrupt requires us to context switch back, we
restore the old register values and save the cpu state back into the
guest memory.
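
As an illustrative sketch (not part of this patch), the calling
convention from the L1 hypervisor's side looks roughly as follows. The
structs are the hv_guest_state and pt_regs layouts defined in this
patch; the helper name is hypothetical, and plpar_hcall_norets()/__pa()
are the usual Linux wrappers:

    /*
     * Hypothetical L1 caller: hand L2 state to the real hypervisor.
     * hv_regs and regs live in L1 memory; their guest real addresses
     * are the two hcall arguments.  The hcall returns when the L2
     * exits, with r3 holding the trap (interrupt vector) and both
     * structs updated with the exit state.
     */
    static long l1_enter_l2(struct hv_guest_state *hv_regs,
                            struct pt_regs *regs)
    {
        return plpar_hcall_norets(H_ENTER_NESTED, __pa(hv_regs), __pa(regs));
    }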

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/spapr_hcall.c     | 285 +++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h   |   3 +-
 target/ppc/cpu.h         |  55 +++++++++
 target/ppc/excp_helper.c |  13 ++-
 4 files changed, 353 insertions(+), 3 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 704ceff8e1..68f3282214 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -16,6 +16,7 @@
 #include "hw/ppc/spapr_ovec.h"
 #include "mmu-book3s-v3.h"
 #include "hw/mem/memory-device.h"
+#include "hw/ppc/ppc.h"
 
 static bool has_spr(PowerPCCPU *cpu, int spr)
 {
@@ -1847,6 +1848,289 @@ static target_ulong h_set_partition_table(PowerPCCPU *cpu,
     return H_SUCCESS;
 }
 
+static void byteswap_pt_regs(struct pt_regs *regs)
+{
+    target_ulong *addr = (target_ulong *) regs;
+
+    for (; addr < ((target_ulong *) (regs + 1)); addr++) {
+        *addr = bswap64(*addr);
+    }
+}
+
+static void byteswap_hv_regs(struct hv_guest_state *hr)
+{
+    hr->version = bswap64(hr->version);
+    hr->lpid = bswap32(hr->lpid);
+    hr->vcpu_token = bswap32(hr->vcpu_token);
+    hr->lpcr = bswap64(hr->lpcr);
+    hr->pcr = bswap64(hr->pcr);
+    hr->amor = bswap64(hr->amor);
+    hr->dpdes = bswap64(hr->dpdes);
+    hr->hfscr = bswap64(hr->hfscr);
+    hr->tb_offset = bswap64(hr->tb_offset);
+    hr->dawr0 = bswap64(hr->dawr0);
+    hr->dawrx0 = bswap64(hr->dawrx0);
+    hr->ciabr = bswap64(hr->ciabr);
+    hr->hdec_expiry = bswap64(hr->hdec_expiry);
+    hr->purr = bswap64(hr->purr);
+    hr->spurr = bswap64(hr->spurr);
+    hr->ic = bswap64(hr->ic);
+    hr->vtb = bswap64(hr->vtb);
+    hr->hdar = bswap64(hr->hdar);
+    hr->hdsisr = bswap64(hr->hdsisr);
+    hr->heir = bswap64(hr->heir);
+    hr->asdr = bswap64(hr->asdr);
+    hr->srr0 = bswap64(hr->srr0);
+    hr->srr1 = bswap64(hr->srr1);
+    hr->sprg[0] = bswap64(hr->sprg[0]);
+    hr->sprg[1] = bswap64(hr->sprg[1]);
+    hr->sprg[2] = bswap64(hr->sprg[2]);
+    hr->sprg[3] = bswap64(hr->sprg[3]);
+    hr->pidr = bswap64(hr->pidr);
+    hr->cfar = bswap64(hr->cfar);
+    hr->ppr = bswap64(hr->ppr);
+}
+
+static void save_regs(PowerPCCPU *cpu, struct pt_regs *regs)
+{
+    CPUPPCState env = cpu->env;
+    int i;
+
+    for (i = 0; i < 32; i++)
+        regs->gpr[i] = env.gpr[i];
+    regs->nip = env.nip;
+    regs->msr = env.msr;
+    regs->ctr = env.ctr;
+    regs->link = env.lr;
+    regs->xer = env.xer;
+    regs->ccr = 0UL;
+    for (i = 0; i < 8; i++)
+        regs->ccr |= ((env.crf[i] & 0xF) << ((7 - i) * 4));
+    regs->dar = env.spr[SPR_DAR];
+    regs->dsisr = env.spr[SPR_DSISR];
+}
+
+static void save_hv_regs(PowerPCCPU *cpu, struct hv_guest_state *hv_regs)
+{
+    CPUPPCState env = cpu->env;
+
+    hv_regs->lpid = env.spr[SPR_LPIDR];
+    hv_regs->lpcr = env.spr[SPR_LPCR];
+    hv_regs->pcr = env.spr[SPR_PCR];
+    hv_regs->amor = env.spr[SPR_AMOR];
+    hv_regs->dpdes = !!(env.pending_interrupts & (1 << PPC_INTERRUPT_DOORBELL));
+    hv_regs->hfscr = env.spr[SPR_HFSCR];
+    hv_regs->tb_offset = env.tb_env->tb_offset;
+    hv_regs->dawr0 = env.spr[SPR_DAWR];
+    hv_regs->dawrx0 = env.spr[SPR_DAWRX];
+    hv_regs->ciabr = env.spr[SPR_CIABR];
+    hv_regs->purr = cpu_ppc_load_purr(&env);
+    hv_regs->spurr = cpu_ppc_load_purr(&env);
+    hv_regs->ic = env.spr[SPR_IC];
+    hv_regs->vtb = cpu_ppc_load_vtb(&env);
+    hv_regs->hdar = env.spr[SPR_HDAR];
+    hv_regs->hdsisr = env.spr[SPR_HDSISR];
+    hv_regs->asdr = env.spr[SPR_ASDR];
+    hv_regs->srr0 = env.spr[SPR_SRR0];
+    hv_regs->srr1 = env.spr[SPR_SRR1];
+    hv_regs->sprg[0] = env.spr[SPR_SPRG0];
+    hv_regs->sprg[1] = env.spr[SPR_SPRG1];
+    hv_regs->sprg[2] = env.spr[SPR_SPRG2];
+    hv_regs->sprg[3] = env.spr[SPR_SPRG3];
+    hv_regs->pidr = env.spr[SPR_BOOKS_PID];
+    hv_regs->cfar = env.cfar;
+    hv_regs->ppr = env.spr[SPR_PPR];
+}
+
+static void restore_regs(PowerPCCPU *cpu, struct pt_regs regs)
+{
+    CPUPPCState *env = &cpu->env;
+    int i;
+
+    for (i = 0; i < 32; i++)
+        env->gpr[i] = regs.gpr[i];
+    env->nip = regs.nip;
+    ppc_store_msr(env, regs.msr);
+    env->ctr = regs.ctr;
+    env->lr = regs.link;
+    env->xer = regs.xer;
+    for (i = 0; i < 8; i++)
+        env->crf[i] = (regs.ccr >> ((7 - i) * 4)) & 0xF;
+    env->spr[SPR_DAR] = regs.dar;
+    env->spr[SPR_DSISR] = regs.dsisr;
+}
+
+static void restore_hv_regs(PowerPCCPU *cpu, struct hv_guest_state hv_regs)
+{
+    CPUPPCState *env = &cpu->env;
+    target_ulong lpcr_mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD
+                                       | LPCR_LPES0 | LPCR_LPES1 | LPCR_MER;
+
+    env->spr[SPR_LPIDR] = hv_regs.lpid;
+    ppc_store_lpcr(cpu, (hv_regs.lpcr & lpcr_mask) |
+                        (env->spr[SPR_LPCR] & ~lpcr_mask));
+    env->spr[SPR_PCR] = hv_regs.pcr;
+    env->spr[SPR_AMOR] = hv_regs.amor;
+    if (hv_regs.dpdes) {
+        env->pending_interrupts |= 1 << PPC_INTERRUPT_DOORBELL;
+        cpu_interrupt(CPU(cpu), CPU_INTERRUPT_HARD);
+    } else {
+        env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DOORBELL);
+    }
+    env->spr[SPR_HFSCR] = hv_regs.hfscr;
+    env->spr[SPR_DAWR] = hv_regs.dawr0;
+    env->spr[SPR_DAWRX] = hv_regs.dawrx0;
+    env->spr[SPR_CIABR] = hv_regs.ciabr;
+    cpu_ppc_store_purr(env, hv_regs.purr);      /* for TCG PURR == SPURR */
+    env->spr[SPR_IC] = hv_regs.ic;
+    cpu_ppc_store_vtb(env, hv_regs.vtb);
+    env->spr[SPR_HDAR] = hv_regs.hdar;
+    env->spr[SPR_HDSISR] = hv_regs.hdsisr;
+    env->spr[SPR_ASDR] = hv_regs.asdr;
+    env->spr[SPR_SRR0] = hv_regs.srr0;
+    env->spr[SPR_SRR1] = hv_regs.srr1;
+    env->spr[SPR_SPRG0] = hv_regs.sprg[0];
+    env->spr[SPR_SPRG1] = hv_regs.sprg[1];
+    env->spr[SPR_SPRG2] = hv_regs.sprg[2];
+    env->spr[SPR_SPRG3] = hv_regs.sprg[3];
+    env->spr[SPR_BOOKS_PID] = hv_regs.pidr;
+    env->cfar = hv_regs.cfar;
+    env->spr[SPR_PPR] = hv_regs.ppr;
+    tlb_flush(CPU(cpu));
+}
+
+static void sanitise_hv_regs(PowerPCCPU *cpu, struct hv_guest_state *hv_regs)
+{
+    CPUPPCState env = cpu->env;
+
+    /* Apply more restrictive set of facilities */
+    hv_regs->hfscr &= ((0xFFUL << 56) | env.spr[SPR_HFSCR]);
+
+    /* Don't match on hypervisor address */
+    hv_regs->dawrx0 &= ~(1UL << 2);
+
+    /* Don't match on hypervisor address */
+    if ((hv_regs->ciabr & 0x3) == 0x3)
+        hv_regs->ciabr &= ~0x3UL;
+}
+
+static inline bool needs_byteswap(const CPUPPCState *env)
+{
+#if defined(HOST_WORDS_BIGENDIAN)
+    return msr_le;
+#else
+    return !msr_le;
+#endif
+}
+
+static target_ulong h_enter_nested(PowerPCCPU *cpu, SpaprMachineState *spapr,
+                                   target_ulong opcode, target_ulong *args)
+{
+    CPUPPCState *env = &cpu->env;
+    env->hv_ptr = args[0];
+    env->regs_ptr = args[1];
+    uint64_t hdec;
+
+    assert(env->spr[SPR_LPIDR] == 0);
+
+    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
+        return H_FUNCTION;
+    }
+
+    if (!env->has_hv_mode || !ppc_check_compat(cpu, CPU_POWERPC_LOGICAL_3_00, 0,
+                                               spapr->max_compat_pvr)
+                          || !ppc64_v3_radix(cpu)) {
+        error_report("pseries guest support only implemented for POWER9 radix\n");
+        return H_HARDWARE;
+    }
+
+    if (!env->spr[SPR_PTCR])
+        return H_NOT_AVAILABLE;
+
+    memset(&env->l1_saved_hv, 0, sizeof(env->l1_saved_hv));
+    memset(&env->l1_saved_regs, 0, sizeof(env->l1_saved_regs));
+
+    /* load l2 state from l1 memory */
+    cpu_physical_memory_read(env->hv_ptr, &env->l2_hv, sizeof(env->l2_hv));
+    if (needs_byteswap(env)) {
+        byteswap_hv_regs(&env->l2_hv);
+    }
+    if (env->l2_hv.version != 1)
+        return H_P2;
+    if (env->l2_hv.lpid == 0)
+        return H_P2;
+    if (!(env->l2_hv.lpcr & LPCR_HR)) {
+        error_report("pseries guest support only implemented for POWER9 radix guests\n");
+        return H_P2;
+    }
+
+    cpu_physical_memory_read(env->regs_ptr, &env->l2_regs, sizeof(env->l2_regs));
+    if (needs_byteswap(env)) {
+        byteswap_pt_regs(&env->l2_regs);
+    }
+
+    /* save l1 values of things */
+    save_regs(cpu, &env->l1_saved_regs);
+    save_hv_regs(cpu, &env->l1_saved_hv);
+
+    /* adjust for timebase */
+    hdec = env->l2_hv.hdec_expiry - cpu_ppc_load_tbl(env);
+    env->tb_env->tb_offset += env->l2_hv.tb_offset;
+    /* load l2 values of things */
+    sanitise_hv_regs(cpu, &env->l2_hv);
+    restore_regs(cpu, env->l2_regs);
+    env->msr &= ~MSR_HVB;
+    restore_hv_regs(cpu, env->l2_hv);
+    cpu_ppc_store_hdecr(env, hdec);
+
+    assert(env->spr[SPR_LPIDR] != 0);
+
+    return env->gpr[3];
+}
+
+void h_exit_nested(PowerPCCPU *cpu)
+{
+    CPUPPCState *env = &cpu->env;
+    uint64_t delta_purr, delta_ic, delta_vtb;
+    target_ulong trap = env->nip;
+
+    assert(env->spr[SPR_LPIDR] != 0);
+
+    /* save l2 values of things */
+    if (trap == 0x100 || trap == 0x200 || trap == 0xc00) {
+        env->nip = env->spr[SPR_SRR0];
+        env->msr = env->spr[SPR_SRR1];
+    } else {
+        env->nip = env->spr[SPR_HSRR0];
+        env->msr = env->spr[SPR_HSRR1];
+    }
+    save_regs(cpu, &env->l2_regs);
+    delta_purr = cpu_ppc_load_purr(env) - env->l2_hv.purr;
+    delta_ic = env->spr[SPR_IC] - env->l2_hv.ic;
+    delta_vtb = cpu_ppc_load_vtb(env) - env->l2_hv.vtb;
+    save_hv_regs(cpu, &env->l2_hv);
+
+    /* restore l1 state */
+    restore_regs(cpu, env->l1_saved_regs);
+    env->tb_env->tb_offset = env->l1_saved_hv.tb_offset;
+    env->l1_saved_hv.purr += delta_purr;
+    env->l1_saved_hv.ic += delta_ic;
+    env->l1_saved_hv.vtb += delta_vtb;
+    restore_hv_regs(cpu, env->l1_saved_hv);
+
+    /* save l2 state back to l1 memory */
+    if (needs_byteswap(env)) {
+        byteswap_hv_regs(&env->l2_hv);
+        byteswap_pt_regs(&env->l2_regs);
+    }
+    cpu_physical_memory_write(env->hv_ptr, &env->l2_hv, sizeof(env->l2_hv));
+    cpu_physical_memory_write(env->regs_ptr, &env->l2_regs, sizeof(env->l2_regs));
+
+    assert(env->spr[SPR_LPIDR] == 0);
+
+    env->gpr[3] = trap;
+}
+
 static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
 static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
 
@@ -1955,6 +2239,7 @@ static void hypercall_register_types(void)
 
     /* Platform-specific hcalls used for nested HV KVM */
     spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
+    spapr_register_hypercall(H_ENTER_NESTED, h_enter_nested);
 
     /* Virtual Processor Home Node */
     spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index e591ee0ba0..7083dea9ef 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -503,7 +503,8 @@ struct SpaprMachineState {
 #define KVMPPC_H_UPDATE_DT      (KVMPPC_HCALL_BASE + 0x3)
 /* Platform-specific hcalls used for nested HV KVM */
 #define H_SET_PARTITION_TABLE   0xF800
-#define KVMPPC_HCALL_MAX        H_SET_PARTITION_TABLE
+#define H_ENTER_NESTED          0xF804
+#define KVMPPC_HCALL_MAX        H_ENTER_NESTED
 
 typedef struct SpaprDeviceTreeUpdateHeader {
     uint32_t version_id;
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 3acc248f40..426015c9cd 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -982,6 +982,54 @@ struct ppc_radix_page_info {
 #define PPC_CPU_OPCODES_LEN          0x40
 #define PPC_CPU_INDIRECT_OPCODES_LEN 0x20
 
+struct pt_regs {
+    target_ulong gpr[32];
+    target_ulong nip;
+    target_ulong msr;
+    target_ulong orig_gpr3;
+    target_ulong ctr;
+    target_ulong link;
+    target_ulong xer;
+    target_ulong ccr;
+    target_ulong softe;
+    target_ulong trap;
+    target_ulong dar;
+    target_ulong dsisr;
+    target_ulong result;
+};
+
+struct hv_guest_state {
+    uint64_t version;            /* version of this structure layout */
+    uint32_t lpid;
+    uint32_t vcpu_token;
+    /* These registers are hypervisor privileged (at least for writing) */
+    uint64_t lpcr;
+    uint64_t pcr;
+    uint64_t amor;
+    uint64_t dpdes;
+    uint64_t hfscr;
+    int64_t  tb_offset;
+    uint64_t dawr0;
+    uint64_t dawrx0;
+    uint64_t ciabr;
+    uint64_t hdec_expiry;
+    uint64_t purr;
+    uint64_t spurr;
+    uint64_t ic;
+    uint64_t vtb;
+    uint64_t hdar;
+    uint64_t hdsisr;
+    uint64_t heir;
+    uint64_t asdr;
+    /* These are OS privileged but need to be set late in guest entry */
+    uint64_t srr0;
+    uint64_t srr1;
+    uint64_t sprg[4];
+    uint64_t pidr;
+    uint64_t cfar;
+    uint64_t ppr;
+};
+
 struct CPUPPCState {
     /* First are the most commonly used resources
      * during translated code execution
@@ -1184,6 +1232,11 @@ struct CPUPPCState {
     uint32_t tm_vscr;
     uint64_t tm_dscr;
     uint64_t tm_tar;
+
+    /* used to store register state when running a nested kvm guest */
+    target_ulong hv_ptr, regs_ptr;
+    struct hv_guest_state l2_hv, l1_saved_hv;
+    struct pt_regs l2_regs, l1_saved_regs;
 };
 
 #define SET_FIT_PERIOD(a_, b_, c_, d_)          \
@@ -2647,4 +2700,6 @@ static inline ppc_avr_t *cpu_avr_ptr(CPUPPCState *env, int i)
 void dump_mmu(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env);
 
 void ppc_maybe_bswap_register(CPUPPCState *env, uint8_t *mem_buf, int len);
+
+void h_exit_nested(PowerPCCPU *cpu);
 #endif /* PPC_CPU_H */
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 10091d4624..9470c02512 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -347,7 +347,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
         env->nip += 4;
 
         /* "PAPR mode" built-in hypercall emulation */
-        if ((lev == 1) && cpu->vhyp) {
+        if ((lev == 1) && (cpu->vhyp && (env->spr[SPR_LPIDR] == 0))) {
             PPCVirtualHypervisorClass *vhc =
                 PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
             vhc->hypercall(cpu->vhyp, cpu);
@@ -664,7 +664,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
     env->spr[srr1] = msr;
 
     /* Sanity check */
-    if (!(env->msr_mask & MSR_HVB)) {
+    if (!(env->msr_mask & MSR_HVB) && (env->spr[SPR_LPIDR] == 0)) {
         if (new_msr & MSR_HVB) {
             cpu_abort(cs, "Trying to deliver HV exception (MSR) %d with "
                       "no HV support\n", excp);
@@ -770,6 +770,15 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
     /* Reset the reservation */
     env->reserve_addr = -1;
 
+    if ((!(env->msr_mask & MSR_HVB) && (new_msr & MSR_HVB))) {
+        /*
+         * We were in a guest, but this interrupt is setting the MSR[HV] bit
+         * meaning we want to handle this at l1. Call h_exit_nested to context
+         * switch back.
+         */
+        h_exit_nested(cpu);
+    }
+
     /* Any interrupt is context synchronizing, check if TCG TLB
      * needs a delayed flush on ppc64
      */
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 10/13] target/ppc: Implement hcall H_TLB_INVALIDATE
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The hcall H_TLB_INVALIDATE is used by a guest acting as a nested
hypervisor to perform partition scoped tlb invalidation, since the
tlbie instructions it would otherwise use are hypervisor privileged.

Check that the arguments are valid and then invalidate the entire tlb,
since that is about all we can do in tcg.
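
For illustration, a hedged sketch of how an L1 kernel might issue this
hcall in place of a partition scoped tlbie. The bit positions follow
the decode in the handler below; the helper name, the meaning of the
middle (RS) argument and the exact field encoding are assumptions:

    /*
     * Hypothetical L1 wrapper: ask the real hypervisor to invalidate
     * the partition scoped TLB entries for an L2.  RIC/PRS/R are placed
     * at the same bit positions as in the tlbie instruction image; IS
     * lives in bits 10..11 of the RB value.
     */
    static long l1_nested_tlb_invalidate(unsigned long ric, unsigned long lpid)
    {
        unsigned long instr = (ric << 18) | (0UL << 17) | (1UL << 16); /* PRS=0, R=1 */
        unsigned long rbval = 3UL << 10;       /* IS=3: invalidate all entries */

        return plpar_hcall_norets(H_TLB_INVALIDATE, instr, lpid, rbval);
    }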

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/spapr_hcall.c   | 28 ++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h |  3 ++-
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 68f3282214..a84d5e2163 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -2131,6 +2131,33 @@ void h_exit_nested(PowerPCCPU *cpu)
     env->gpr[3] = trap;
 }
 
+static target_ulong h_nested_tlb_invalidate(PowerPCCPU *cpu,
+                                            SpaprMachineState *spapr,
+                                            target_ulong opcode,
+                                            target_ulong *args)
+{
+    target_ulong instr = args[0];
+    target_ulong rbval = args[2];
+    int r, ric, prs, is;
+
+    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
+        return H_FUNCTION;
+    }
+
+    ric = (instr >> 18) & 0x3;
+    prs = (instr >> 17) & 0x1;
+    r = (instr >> 16) & 0x1;
+    is = (rbval >> 10) & 0x3;
+
+    if ((!r) || (prs) || (ric == 3) || (is == 1) || ((!is) && (ric == 1 ||
+                                                               ric == 2)))
+        return H_PARAMETER;
+
+    /* Invalidate everything, not much else we can do */
+    cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
+    return H_SUCCESS;
+}
+
 static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
 static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
 
@@ -2240,6 +2267,7 @@ static void hypercall_register_types(void)
     /* Platform-specific hcalls used for nested HV KVM */
     spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
     spapr_register_hypercall(H_ENTER_NESTED, h_enter_nested);
+    spapr_register_hypercall(H_TLB_INVALIDATE, h_nested_tlb_invalidate);
 
     /* Virtual Processor Home Node */
     spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 7083dea9ef..6a614c445f 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -504,7 +504,8 @@ struct SpaprMachineState {
 /* Platform-specific hcalls used for nested HV KVM */
 #define H_SET_PARTITION_TABLE   0xF800
 #define H_ENTER_NESTED          0xF804
-#define KVMPPC_HCALL_MAX        H_ENTER_NESTED
+#define H_TLB_INVALIDATE        0xF808
+#define KVMPPC_HCALL_MAX        H_TLB_INVALIDATE
 
 typedef struct SpaprDeviceTreeUpdateHeader {
     uint32_t version_id;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 11/13] target/ppc: Implement hcall H_COPY_TOFROM_GUEST
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

The hcall H_COPY_TOFROM_GUEST is used by a guest acting as a nested
hypervisor to access quadrants, since quadrant access is hypervisor
privileged.

Translate the guest address to be accessed, map the memory and perform
the access on behalf of the guest. If the parameters are invalid, the
address cannot be translated, or the memory cannot be mapped, then fail
the access.
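
For reference, the register interface implemented below is roughly the
following (the field names are only for illustration):

    /*
     * args[0] lpid  - LPID of the nested (L2) guest, must be non-zero
     * args[1] pid   - PID under which eaddr is translated
     * args[2] eaddr - L2 effective address; the top 12 bits must be zero
     * args[3] to    - destination buffer (caller's real address): a read
     * args[4] from  - source buffer (caller's real address): a write
     * args[5] n     - byte count; must not cross the translated page
     *
     * Exactly one of args[3] and args[4] may be non-zero.
     */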

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/spapr_hcall.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h   |  3 +-
 target/ppc/mmu-radix64.c |  7 ++---
 target/ppc/mmu-radix64.h |  4 +++
 4 files changed, 83 insertions(+), 5 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index a84d5e2163..a370d70500 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -17,6 +17,7 @@
 #include "mmu-book3s-v3.h"
 #include "hw/mem/memory-device.h"
 #include "hw/ppc/ppc.h"
+#include "mmu-radix64.h"
 
 static bool has_spr(PowerPCCPU *cpu, int spr)
 {
@@ -2158,6 +2159,78 @@ static target_ulong h_nested_tlb_invalidate(PowerPCCPU *cpu,
     return H_SUCCESS;
 }
 
+static target_ulong h_copy_tofrom_guest(PowerPCCPU *cpu,
+                                        SpaprMachineState *spapr,
+                                        target_ulong opcode, target_ulong *args)
+{
+    target_ulong lpid = args[0];
+    target_ulong pid = args[1];
+    vaddr eaddr = args[2];
+    target_ulong gp_to = args[3];
+    target_ulong gp_from = args[4];
+    target_ulong n = args[5];
+    int is_load = !!gp_to;
+    void *from, *to;
+    int prot, psize;
+    hwaddr raddr, to_len, from_len;
+
+    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
+        return H_FUNCTION;
+    }
+
+    if ((gp_to && gp_from) || (!gp_to && !gp_from)) {
+        return H_PARAMETER;
+    }
+
+    if (eaddr & (0xFFFUL << 52)) {
+        return H_PARAMETER;
+    }
+
+    if (!lpid) {
+        return H_PARAMETER;
+    }
+
+    /* Translate eaddr to raddr */
+    if (ppc_radix64_xlate(cpu, eaddr, is_load, lpid, pid, 1, &raddr, &psize,
+                          &prot, 0)) {
+        return H_NOT_FOUND;
+    }
+    if (((raddr & ((1UL << psize) - 1)) + n) >= (1UL << psize)) {
+        return H_PARAMETER;
+    }
+
+    if (is_load) {
+        gp_from = raddr;
+    } else {
+        gp_to = raddr;
+    }
+
+    /* Map the memory regions and perform a memory copy */
+    from = cpu_physical_memory_map(gp_from, &from_len, 0);
+    if (!from) {
+        return H_NOT_FOUND;
+    }
+    if (from_len < n) {
+        cpu_physical_memory_unmap(from, from_len, 0, 0);
+        return H_PARAMETER;
+    }
+    to = cpu_physical_memory_map(gp_to, &to_len, 1);
+    if (!to) {
+        cpu_physical_memory_unmap(from, from_len, 0, 0);
+        return H_PARAMETER;
+    }
+    if (to_len < n) {
+        cpu_physical_memory_unmap(from, from_len, 0, 0);
+        cpu_physical_memory_unmap(to, to_len, 1, 0);
+        return H_PARAMETER;
+    }
+    memcpy(to, from, n);
+    cpu_physical_memory_unmap(from, from_len, 0, n);
+    cpu_physical_memory_unmap(to, to_len, 1, n);
+
+    return H_SUCCESS;
+}
+
 static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
 static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
 
@@ -2268,6 +2341,7 @@ static void hypercall_register_types(void)
     spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
     spapr_register_hypercall(H_ENTER_NESTED, h_enter_nested);
     spapr_register_hypercall(H_TLB_INVALIDATE, h_nested_tlb_invalidate);
+    spapr_register_hypercall(H_COPY_TOFROM_GUEST, h_copy_tofrom_guest);
 
     /* Virtual Processor Home Node */
     spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 6a614c445f..d62f4108d4 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -505,7 +505,8 @@ struct SpaprMachineState {
 #define H_SET_PARTITION_TABLE   0xF800
 #define H_ENTER_NESTED          0xF804
 #define H_TLB_INVALIDATE        0xF808
-#define KVMPPC_HCALL_MAX        H_TLB_INVALIDATE
+#define H_COPY_TOFROM_GUEST     0xF80C
+#define KVMPPC_HCALL_MAX        H_COPY_TOFROM_GUEST
 
 typedef struct SpaprDeviceTreeUpdateHeader {
     uint32_t version_id;
diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 6118ad1b00..2a8147fc38 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -429,10 +429,9 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     return true;
 }
 
-static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
-                             uint64_t lpid, uint64_t pid, bool relocation,
-                             hwaddr *raddr, int *psizep, int *protp,
-                             bool cause_excp)
+int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx, uint64_t lpid,
+                      uint64_t pid, bool relocation, hwaddr *raddr, int *psizep,
+                      int *protp, bool cause_excp)
 {
     CPUPPCState *env = &cpu->env;
     ppc_v3_pate_t pate;
diff --git a/target/ppc/mmu-radix64.h b/target/ppc/mmu-radix64.h
index 96228546aa..c0bbd5c332 100644
--- a/target/ppc/mmu-radix64.h
+++ b/target/ppc/mmu-radix64.h
@@ -66,6 +66,10 @@ static inline int ppc_radix64_get_prot_amr(PowerPCCPU *cpu)
            (iamr & 0x1 ? 0 : PAGE_EXEC);
 }
 
+int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx, uint64_t lpid,
+                      uint64_t pid, bool relocation, hwaddr *raddr, int *psizep,
+                      int *protp, bool cause_excp);
+
 #endif /* TARGET_PPC64 */
 
 #endif /* CONFIG_USER_ONLY */
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 12/13] target/ppc: Introduce POWER9 DD2.2 cpu type
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

Introduce a POWER9 DD2.2 cpu type with pvr 0x004E1202.

A DD2.2 POWER9 cpu type is needed to enable kvm for pseries tcg guests,
since it means they will use the H_ENTER_NESTED hcall to run a guest
rather than attempting the generic guest entry path, which would fail.
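
Together with the final patch in the series, the level 1 guest can then
be started along the lines of (sketch only, unrelated options elided):

    qemu-system-ppc64 -machine pseries,cap-nested-hv=on \
                      -cpu power9_v2.2 ...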

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/spapr_cpu_core.c | 1 +
 target/ppc/cpu-models.c | 2 ++
 target/ppc/cpu-models.h | 1 +
 3 files changed, 4 insertions(+)

diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index 40e7010cf0..98d46c6edb 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -399,6 +399,7 @@ static const TypeInfo spapr_cpu_core_type_infos[] = {
     DEFINE_SPAPR_CPU_CORE_TYPE("power8nvl_v1.0"),
     DEFINE_SPAPR_CPU_CORE_TYPE("power9_v1.0"),
     DEFINE_SPAPR_CPU_CORE_TYPE("power9_v2.0"),
+    DEFINE_SPAPR_CPU_CORE_TYPE("power9_v2.2"),
 #ifdef CONFIG_KVM
     DEFINE_SPAPR_CPU_CORE_TYPE("host"),
 #endif
diff --git a/target/ppc/cpu-models.c b/target/ppc/cpu-models.c
index 7c75963e3c..603ae7f5b4 100644
--- a/target/ppc/cpu-models.c
+++ b/target/ppc/cpu-models.c
@@ -773,6 +773,8 @@
                 "POWER9 v1.0")
     POWERPC_DEF("power9_v2.0",   CPU_POWERPC_POWER9_DD20,            POWER9,
                 "POWER9 v2.0")
+    POWERPC_DEF("power9_v2.2",   CPU_POWERPC_POWER9_DD22,            POWER9,
+                "POWER9 v2.2")
 #endif /* defined (TARGET_PPC64) */
 
 /***************************************************************************/
diff --git a/target/ppc/cpu-models.h b/target/ppc/cpu-models.h
index efdb2fa53c..820e94b0c8 100644
--- a/target/ppc/cpu-models.h
+++ b/target/ppc/cpu-models.h
@@ -373,6 +373,7 @@ enum {
     CPU_POWERPC_POWER9_BASE        = 0x004E0000,
     CPU_POWERPC_POWER9_DD1         = 0x004E0100,
     CPU_POWERPC_POWER9_DD20        = 0x004E1200,
+    CPU_POWERPC_POWER9_DD22        = 0x004E1202,
     CPU_POWERPC_970_v22            = 0x00390202,
     CPU_POWERPC_970FX_v10          = 0x00391100,
     CPU_POWERPC_970FX_v20          = 0x003C0200,
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Qemu-devel] [QEMU-PPC] [PATCH 13/13] target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg
@ 2019-05-03  5:53   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:53 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug, Suraj Jitindar Singh

It is now possible to use nested kvm-hv under tcg, so allow it to be
enabled.

Note that nested kvm-hv requires that RC (reference/change) updates to
ptes be done by software, otherwise the page tables get out of sync. So
disable hardware RC updates when nested kvm-hv is enabled.
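
So with this applied one would expect, for example (full command lines
elided):

    -machine pseries,cap-nested-hv=on -cpu power9_v2.0
        -> rejected with "Nested KVM-HV only supported on POWER9 DD2.2,
           try cap-nested-hv=off or -cpu power9_v2.2"
    -machine pseries,cap-nested-hv=on -cpu power9_v2.2
        -> accepted; disable_hw_rc_updates is set so RC updates to the
           ptes are left to software, per the mmu-radix64.c change below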

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
 hw/ppc/spapr_caps.c      | 22 ++++++++++++++++++----
 target/ppc/cpu.h         |  1 +
 target/ppc/mmu-radix64.c |  4 ++--
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/hw/ppc/spapr_caps.c b/hw/ppc/spapr_caps.c
index 3278c09b0f..7fe07d83dd 100644
--- a/hw/ppc/spapr_caps.c
+++ b/hw/ppc/spapr_caps.c
@@ -389,10 +389,7 @@ static void cap_nested_kvm_hv_apply(SpaprMachineState *spapr,
         return;
     }
 
-    if (tcg_enabled()) {
-        error_setg(errp,
-                   "No Nested KVM-HV support in tcg, try cap-nested-hv=off");
-    } else if (kvm_enabled()) {
+    if (kvm_enabled()) {
         if (!kvmppc_has_cap_nested_kvm_hv()) {
             error_setg(errp,
 "KVM implementation does not support Nested KVM-HV, try cap-nested-hv=off");
@@ -400,6 +397,22 @@ static void cap_nested_kvm_hv_apply(SpaprMachineState *spapr,
                 error_setg(errp,
 "Error enabling cap-nested-hv with KVM, try cap-nested-hv=off");
         }
+    } /* else { nothing required for tcg } */
+}
+
+static void cap_nested_kvm_hv_cpu_apply(SpaprMachineState *spapr,
+                                        PowerPCCPU *cpu,
+                                        uint8_t val, Error **errp)
+{
+    CPUPPCState *env = &cpu->env;
+
+    if (tcg_enabled() && val) {
+        if (env->spr[SPR_PVR] != 0x004E1202) {
+            error_setg(errp, "Nested KVM-HV only supported on POWER9 DD2.2, "
+                             "try cap-nested-hv=off or -cpu power9_v2.2");
+            return;
+        }
+        env->disable_hw_rc_updates = true;
     }
 }
 
@@ -544,6 +557,7 @@ SpaprCapabilityInfo capability_table[SPAPR_CAP_NUM] = {
         .set = spapr_cap_set_bool,
         .type = "bool",
         .apply = cap_nested_kvm_hv_apply,
+        .cpu_apply = cap_nested_kvm_hv_cpu_apply,
     },
     [SPAPR_CAP_LARGE_DECREMENTER] = {
         .name = "large-decr",
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 426015c9cd..6502e0de82 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1237,6 +1237,7 @@ struct CPUPPCState {
     target_ulong hv_ptr, regs_ptr;
     struct hv_guest_state l2_hv, l1_saved_hv;
     struct pt_regs l2_regs, l1_saved_regs;
+    bool disable_hw_rc_updates;
 };
 
 #define SET_FIT_PERIOD(a_, b_, c_, d_)          \
diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 2a8147fc38..cc06967dbe 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -31,9 +31,9 @@
 static inline bool ppc_radix64_hw_rc_updates(CPUPPCState *env)
 {
 #ifdef CONFIG_ATOMIC64
-    return true;
+    return !env->disable_hw_rc_updates;
 #else
-    return !qemu_tcg_mttcg_enabled();
+    return !qemu_tcg_mttcg_enabled() && !env->disable_hw_rc_updates;
 #endif
 }
 
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 00/13] target/ppc: Implement KVM support under TCG
@ 2019-05-03  5:58   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-03  5:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, david, clg, groug

On Fri, 2019-05-03 at 15:53 +1000, Suraj Jitindar Singh wrote:
> This patch series adds the necessary parts so that a tcg guest is
> able to use
> kvm facilities. That is a tcg guest can boot its own kvm guests.
> 
> The main requirements for this were a few registers and instructions
> as well as
> some hcalls and the addition of partition scoped translation in the
> radix mmu
> emulation.
> 
> This can be used to boot a kvm guest under a pseries tcg guest:
> Use power9_v2.2 cpu and add -machine cap-nested-hv=on for the first
> guest.
> Then inside that guest boot a kvm guest as normal.
> This takes advantage of the new hcalls with qemu emulating them as a
> normal
> hypervisor would on a real machine.
> 
> This can also be used to boot a kvm guest under a powernv tcg guest:
> Use any power9 cpu type.
> This takes advantage of the new hv register access added.
> Note that for powernv there is no xive interrupt escalation for KVM
> which means
> that while the guest will boot, it won't receive any interrupts.

Ah, so I'm suddenly aware I based this on the wrong tree, so it won't
apply to your dwg-ppc-for-4.1, David.

Consider this an RFC and I'll rebase and resend when required.

> 
> Suraj Jitindar Singh (13):
>   target/ppc: Implement the VTB for HV access
>   target/ppc: Work [S]PURR implementation and add HV support
>   target/ppc: Add SPR ASDR
>   target/ppc: Add SPR TBU40
>   target/ppc: Add privileged message send facilities
>   target/ppc: Enforce that the root page directory size must be at
> least
>     5
>   target/ppc: Handle partition scoped radix tree translation
>   target/ppc: Implement hcall H_SET_PARTITION_TABLE
>   target/ppc: Implement hcall H_ENTER_NESTED
>   target/ppc: Implement hcall H_TLB_INVALIDATE
>   target/ppc: Implement hcall H_COPY_TOFROM_GUEST
>   target/ppc: Introduce POWER9 DD2.2 cpu type
>   target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg
> 
>  hw/ppc/ppc.c                    |  46 ++++-
>  hw/ppc/spapr_caps.c             |  22 ++-
>  hw/ppc/spapr_cpu_core.c         |   1 +
>  hw/ppc/spapr_hcall.c            | 409
> +++++++++++++++++++++++++++++++++++++++
>  include/hw/ppc/ppc.h            |   4 +-
>  include/hw/ppc/spapr.h          |   7 +-
>  linux-user/ppc/cpu_loop.c       |   5 +
>  target/ppc/cpu-models.c         |   2 +
>  target/ppc/cpu-models.h         |   1 +
>  target/ppc/cpu.h                |  70 +++++++
>  target/ppc/excp_helper.c        |  79 +++++++-
>  target/ppc/helper.h             |   9 +
>  target/ppc/misc_helper.c        |  46 +++++
>  target/ppc/mmu-radix64.c        | 412 ++++++++++++++++++++++++++++
> ------------
>  target/ppc/mmu-radix64.h        |   4 +
>  target/ppc/timebase_helper.c    |  20 ++
>  target/ppc/translate.c          |  28 +++
>  target/ppc/translate_init.inc.c | 107 +++++++++--
>  18 files changed, 1115 insertions(+), 157 deletions(-)
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 01/13] target/ppc: Implement the VTB for HV access
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-06  6:02   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-06  6:02 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 7409 bytes --]

On Fri, May 03, 2019 at 03:53:04PM +1000, Suraj Jitindar Singh wrote:
> The virtual timebase register (VTB) is a 64-bit register which
> increments at the same rate as the timebase register, present on POWER8
> and later processors.
> 
> The register is able to be read/written by the hypervisor and read by
> the supervisor. All other accesses are illegal.
> 
> Currently the VTB is just an alias for the timebase (TB) register.
> 
> Implement the VTB so that it can be read/written independently of the TB.
> Make use of the existing method for accessing timebase facilities whereby
> the compensation is stored: it is used to compute the value on reads and
> is updated on writes.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

LGTM, but now conflicts with the ppc-for-4.1 tree.
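
For reference, the offset scheme being reused here boils down to
(helper name illustrative, cf. cpu_ppc_get_tb()/cpu_ppc_store_tb() in
hw/ppc/ppc.c):

    read:   VTB        = ns_to_tb(now) + vtb_offset
    write:  vtb_offset = value - ns_to_tb(now)

where ns_to_tb() scales QEMU_CLOCK_VIRTUAL nanoseconds by tb_freq.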

> ---
>  hw/ppc/ppc.c                    | 16 ++++++++++++++++
>  include/hw/ppc/ppc.h            |  1 +
>  linux-user/ppc/cpu_loop.c       |  5 +++++
>  target/ppc/cpu.h                |  2 ++
>  target/ppc/helper.h             |  2 ++
>  target/ppc/timebase_helper.c    | 10 ++++++++++
>  target/ppc/translate_init.inc.c | 19 +++++++++++++++----
>  7 files changed, 51 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
> index b2ff99ec66..a57ca64626 100644
> --- a/hw/ppc/ppc.c
> +++ b/hw/ppc/ppc.c
> @@ -694,6 +694,22 @@ void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value)
>                       &tb_env->atb_offset, ((uint64_t)value << 32) | tb);
>  }
>  
> +uint64_t cpu_ppc_load_vtb(CPUPPCState *env)
> +{
> +    ppc_tb_t *tb_env = env->tb_env;
> +
> +    return cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> +                          tb_env->vtb_offset);
> +}
> +
> +void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value)
> +{
> +    ppc_tb_t *tb_env = env->tb_env;
> +
> +    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> +                     &tb_env->vtb_offset, value);
> +}
> +
>  static void cpu_ppc_tb_stop (CPUPPCState *env)
>  {
>      ppc_tb_t *tb_env = env->tb_env;
> diff --git a/include/hw/ppc/ppc.h b/include/hw/ppc/ppc.h
> index 4bdcb8bacd..205150e6b4 100644
> --- a/include/hw/ppc/ppc.h
> +++ b/include/hw/ppc/ppc.h
> @@ -23,6 +23,7 @@ struct ppc_tb_t {
>      /* Time base management */
>      int64_t  tb_offset;    /* Compensation                    */
>      int64_t  atb_offset;   /* Compensation                    */
> +    int64_t  vtb_offset;
>      uint32_t tb_freq;      /* TB frequency                    */
>      /* Decrementer management */
>      uint64_t decr_next;    /* Tick for next decr interrupt    */
> diff --git a/linux-user/ppc/cpu_loop.c b/linux-user/ppc/cpu_loop.c
> index 801f5ace29..c715861804 100644
> --- a/linux-user/ppc/cpu_loop.c
> +++ b/linux-user/ppc/cpu_loop.c
> @@ -46,6 +46,11 @@ uint32_t cpu_ppc_load_atbu(CPUPPCState *env)
>      return cpu_ppc_get_tb(env) >> 32;
>  }
>  
> +uint64_t cpu_ppc_load_vtb(CPUPPCState *env)
> +{
> +    return cpu_ppc_get_tb(env);
> +}
> +
>  uint32_t cpu_ppc601_load_rtcu(CPUPPCState *env)
>  __attribute__ (( alias ("cpu_ppc_load_tbu") ));
>  
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index fe93cf0555..70167bae22 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1327,6 +1327,8 @@ uint64_t cpu_ppc_load_atbl (CPUPPCState *env);
>  uint32_t cpu_ppc_load_atbu (CPUPPCState *env);
>  void cpu_ppc_store_atbl (CPUPPCState *env, uint32_t value);
>  void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value);
> +uint64_t cpu_ppc_load_vtb(CPUPPCState *env);
> +void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value);
>  bool ppc_decr_clear_on_delivery(CPUPPCState *env);
>  target_ulong cpu_ppc_load_decr (CPUPPCState *env);
>  void cpu_ppc_store_decr (CPUPPCState *env, target_ulong value);
> diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> index 69cbf7922f..3701bcbf1b 100644
> --- a/target/ppc/helper.h
> +++ b/target/ppc/helper.h
> @@ -680,6 +680,7 @@ DEF_HELPER_FLAGS_1(load_tbl, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_1(load_tbu, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_1(load_atbl, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_1(load_atbu, TCG_CALL_NO_RWG, tl, env)
> +DEF_HELPER_FLAGS_1(load_vtb, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_1(load_601_rtcl, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_1(load_601_rtcu, TCG_CALL_NO_RWG, tl, env)
>  #if !defined(CONFIG_USER_ONLY)
> @@ -700,6 +701,7 @@ DEF_HELPER_FLAGS_1(load_decr, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_2(store_decr, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_FLAGS_1(load_hdecr, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_2(store_hdecr, TCG_CALL_NO_RWG, void, env, tl)
> +DEF_HELPER_FLAGS_2(store_vtb, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_2(store_hid0_601, void, env, tl)
>  DEF_HELPER_3(store_403_pbr, void, env, i32, tl)
>  DEF_HELPER_FLAGS_1(load_40x_pit, TCG_CALL_NO_RWG, tl, env)
> diff --git a/target/ppc/timebase_helper.c b/target/ppc/timebase_helper.c
> index 73363e08ae..8c3c2fe67c 100644
> --- a/target/ppc/timebase_helper.c
> +++ b/target/ppc/timebase_helper.c
> @@ -45,6 +45,11 @@ target_ulong helper_load_atbu(CPUPPCState *env)
>      return cpu_ppc_load_atbu(env);
>  }
>  
> +target_ulong helper_load_vtb(CPUPPCState *env)
> +{
> +    return cpu_ppc_load_vtb(env);
> +}
> +
>  #if defined(TARGET_PPC64) && !defined(CONFIG_USER_ONLY)
>  target_ulong helper_load_purr(CPUPPCState *env)
>  {
> @@ -113,6 +118,11 @@ void helper_store_hdecr(CPUPPCState *env, target_ulong val)
>      cpu_ppc_store_hdecr(env, val);
>  }
>  
> +void helper_store_vtb(CPUPPCState *env, target_ulong val)
> +{
> +    cpu_ppc_store_vtb(env, val);
> +}
> +
>  target_ulong helper_load_40x_pit(CPUPPCState *env)
>  {
>      return load_40x_pit(env);
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index 0bd555eb19..e3f941800b 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -310,6 +310,16 @@ static void spr_write_hdecr(DisasContext *ctx, int sprn, int gprn)
>      }
>  }
>  
> +static void spr_read_vtb(DisasContext *ctx, int gprn, int sprn)
> +{
> +    gen_helper_load_vtb(cpu_gpr[gprn], cpu_env);
> +}
> +
> +static void spr_write_vtb(DisasContext *ctx, int sprn, int gprn)
> +{
> +    gen_helper_store_vtb(cpu_env, cpu_gpr[gprn]);
> +}
> +
>  #endif
>  #endif
>  
> @@ -8133,10 +8143,11 @@ static void gen_spr_power8_ebb(CPUPPCState *env)
>  /* Virtual Time Base */
>  static void gen_spr_vtb(CPUPPCState *env)
>  {
> -    spr_register_kvm(env, SPR_VTB, "VTB",
> -                 SPR_NOACCESS, SPR_NOACCESS,
> -                 &spr_read_tbl, SPR_NOACCESS,
> -                 KVM_REG_PPC_VTB, 0x00000000);
> +    spr_register_kvm_hv(env, SPR_VTB, "VTB",
> +                        SPR_NOACCESS, SPR_NOACCESS,
> +                        &spr_read_vtb, SPR_NOACCESS,
> +                        &spr_read_vtb, &spr_write_vtb,
> +                        KVM_REG_PPC_VTB, 0x00000000);
>  }
>  
>  static void gen_spr_power8_fscr(CPUPPCState *env)

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 02/13] target/ppc: Work [S]PURR implementation and add HV support
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-06  6:15   ` David Gibson
  2019-05-07  1:28     ` Suraj Jitindar Singh
  -1 siblings, 1 reply; 47+ messages in thread
From: David Gibson @ 2019-05-06  6:15 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 7017 bytes --]

On Fri, May 03, 2019 at 03:53:05PM +1000, Suraj Jitindar Singh wrote:
> The Processor Utilisation of Resources Register (PURR) and Scaled
> Processor Utilisation of Resources Register (SPURR) provide an estimate
> of the resources used by the thread, present on POWER7 and later
> processors.
> 
> Currently the [S]PURR registers simply count at the rate of the
> timebase.
> 
> Preserve this behaviour but rework the implementation to store an offset
> like the timebase rather than doing the calculation manually. Also allow
> hypervisor write access to the register along with the currently
> available read access.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Hm.  How will this affect migration of the PURR and SPURR?

> ---
>  hw/ppc/ppc.c                    | 17 +++++++----------
>  include/hw/ppc/ppc.h            |  3 +--
>  target/ppc/cpu.h                |  1 +
>  target/ppc/helper.h             |  1 +
>  target/ppc/timebase_helper.c    |  5 +++++
>  target/ppc/translate_init.inc.c | 23 +++++++++++++++--------
>  6 files changed, 30 insertions(+), 20 deletions(-)
> 
> diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
> index a57ca64626..b567156f97 100644
> --- a/hw/ppc/ppc.c
> +++ b/hw/ppc/ppc.c
> @@ -819,12 +819,9 @@ target_ulong cpu_ppc_load_hdecr (CPUPPCState *env)
>  uint64_t cpu_ppc_load_purr (CPUPPCState *env)
>  {
>      ppc_tb_t *tb_env = env->tb_env;
> -    uint64_t diff;
>  
> -    diff = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - tb_env->purr_start;
> -
> -    return tb_env->purr_load +
> -        muldiv64(diff, tb_env->tb_freq, NANOSECONDS_PER_SECOND);
> +    return cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> +                          tb_env->purr_offset);
>  }
>  
>  /* When decrementer expires,
> @@ -980,12 +977,12 @@ static void cpu_ppc_hdecr_cb(void *opaque)
>      cpu_ppc_hdecr_excp(cpu);
>  }
>  
> -static void cpu_ppc_store_purr(PowerPCCPU *cpu, uint64_t value)
> +void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value)
>  {
> -    ppc_tb_t *tb_env = cpu->env.tb_env;
> +    ppc_tb_t *tb_env = env->tb_env;
>  
> -    tb_env->purr_load = value;
> -    tb_env->purr_start = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
> +    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> +                     &tb_env->purr_offset, value);
>  }
>  
>  static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
> @@ -1002,7 +999,7 @@ static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
>       */
>      _cpu_ppc_store_decr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
>      _cpu_ppc_store_hdecr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
> -    cpu_ppc_store_purr(cpu, 0x0000000000000000ULL);
> +    cpu_ppc_store_purr(env, 0x0000000000000000ULL);
>  }
>  
>  static void timebase_save(PPCTimebase *tb)
> diff --git a/include/hw/ppc/ppc.h b/include/hw/ppc/ppc.h
> index 205150e6b4..b09ffbf300 100644
> --- a/include/hw/ppc/ppc.h
> +++ b/include/hw/ppc/ppc.h
> @@ -32,8 +32,7 @@ struct ppc_tb_t {
>      /* Hypervisor decrementer management */
>      uint64_t hdecr_next;    /* Tick for next hdecr interrupt  */
>      QEMUTimer *hdecr_timer;
> -    uint64_t purr_load;
> -    uint64_t purr_start;
> +    int64_t purr_offset;
>      void *opaque;
>      uint32_t flags;
>  };
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 70167bae22..19b3e1de0e 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1335,6 +1335,7 @@ void cpu_ppc_store_decr (CPUPPCState *env, target_ulong value);
>  target_ulong cpu_ppc_load_hdecr (CPUPPCState *env);
>  void cpu_ppc_store_hdecr (CPUPPCState *env, target_ulong value);
>  uint64_t cpu_ppc_load_purr (CPUPPCState *env);
> +void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value);
>  uint32_t cpu_ppc601_load_rtcl (CPUPPCState *env);
>  uint32_t cpu_ppc601_load_rtcu (CPUPPCState *env);
>  #if !defined(CONFIG_USER_ONLY)
> diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> index 3701bcbf1b..336e7802fb 100644
> --- a/target/ppc/helper.h
> +++ b/target/ppc/helper.h
> @@ -686,6 +686,7 @@ DEF_HELPER_FLAGS_1(load_601_rtcu, TCG_CALL_NO_RWG, tl, env)
>  #if !defined(CONFIG_USER_ONLY)
>  #if defined(TARGET_PPC64)
>  DEF_HELPER_FLAGS_1(load_purr, TCG_CALL_NO_RWG, tl, env)
> +DEF_HELPER_FLAGS_2(store_purr, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_2(store_ptcr, void, env, tl)
>  #endif
>  DEF_HELPER_2(store_sdr1, void, env, tl)
> diff --git a/target/ppc/timebase_helper.c b/target/ppc/timebase_helper.c
> index 8c3c2fe67c..2395295b77 100644
> --- a/target/ppc/timebase_helper.c
> +++ b/target/ppc/timebase_helper.c
> @@ -55,6 +55,11 @@ target_ulong helper_load_purr(CPUPPCState *env)
>  {
>      return (target_ulong)cpu_ppc_load_purr(env);
>  }
> +
> +void helper_store_purr(CPUPPCState *env, target_ulong val)
> +{
> +    cpu_ppc_store_purr(env, val);
> +}
>  #endif
>  
>  target_ulong helper_load_601_rtcl(CPUPPCState *env)
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index e3f941800b..9cd33e79ef 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -285,6 +285,11 @@ static void spr_read_purr(DisasContext *ctx, int gprn, int sprn)
>      gen_helper_load_purr(cpu_gpr[gprn], cpu_env);
>  }
>  
> +static void spr_write_purr(DisasContext *ctx, int sprn, int gprn)
> +{
> +    gen_helper_store_purr(cpu_env, cpu_gpr[gprn]);
> +}
> +
>  /* HDECR */
>  static void spr_read_hdecr(DisasContext *ctx, int gprn, int sprn)
>  {
> @@ -7972,14 +7977,16 @@ static void gen_spr_book3s_purr(CPUPPCState *env)
>  {
>  #if !defined(CONFIG_USER_ONLY)
>      /* PURR & SPURR: Hack - treat these as aliases for the TB for now */
> -    spr_register_kvm(env, SPR_PURR,   "PURR",
> -                     &spr_read_purr, SPR_NOACCESS,
> -                     &spr_read_purr, SPR_NOACCESS,
> -                     KVM_REG_PPC_PURR, 0x00000000);
> -    spr_register_kvm(env, SPR_SPURR,   "SPURR",
> -                     &spr_read_purr, SPR_NOACCESS,
> -                     &spr_read_purr, SPR_NOACCESS,
> -                     KVM_REG_PPC_SPURR, 0x00000000);
> +    spr_register_kvm_hv(env, SPR_PURR,   "PURR",
> +                        &spr_read_purr, SPR_NOACCESS,
> +                        &spr_read_purr, SPR_NOACCESS,
> +                        &spr_read_purr, &spr_write_purr,
> +                        KVM_REG_PPC_PURR, 0x00000000);
> +    spr_register_kvm_hv(env, SPR_SPURR,   "SPURR",
> +                        &spr_read_purr, SPR_NOACCESS,
> +                        &spr_read_purr, SPR_NOACCESS,
> +                        &spr_read_purr, &spr_write_purr,
> +                        KVM_REG_PPC_SPURR, 0x00000000);
>  #endif
>  }
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 03/13] target/ppc: Add SPR ASDR
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-06  6:16   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-06  6:16 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 2229 bytes --]

On Fri, May 03, 2019 at 03:53:06PM +1000, Suraj Jitindar Singh wrote:
> The Access Segment Descriptor Register (ASDR) provides information about
> the storage element when taking a hypervisor storage interrupt. When
> performing nested radix address translation, this is normally the guest
> real address. This register is present on POWER9 processors and later.
> 
> Implement the ASDR; note that read and write access is limited to the
> hypervisor.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  target/ppc/cpu.h                | 1 +
>  target/ppc/translate_init.inc.c | 6 ++++++
>  2 files changed, 7 insertions(+)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 19b3e1de0e..8d66265e5a 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1797,6 +1797,7 @@ void ppc_compat_add_property(Object *obj, const char *name,
>  #define SPR_MPC_MD_DBRAM1     (0x32A)
>  #define SPR_RCPU_L2U_RA3      (0x32B)
>  #define SPR_TAR               (0x32F)
> +#define SPR_ASDR              (0x330)
>  #define SPR_IC                (0x350)
>  #define SPR_VTB               (0x351)
>  #define SPR_MMCRC             (0x353)
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index 9cd33e79ef..a0cae58e19 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -8243,6 +8243,12 @@ static void gen_spr_power9_mmu(CPUPPCState *env)
>                          SPR_NOACCESS, SPR_NOACCESS,
>                          &spr_read_generic, &spr_write_ptcr,
>                          KVM_REG_PPC_PTCR, 0x00000000);
> +    /* Address Segment Descriptor Register */
> +    spr_register_hv(env, SPR_ASDR, "ASDR",
> +                    SPR_NOACCESS, SPR_NOACCESS,
> +                    SPR_NOACCESS, SPR_NOACCESS,
> +                    &spr_read_generic, &spr_write_generic,
> +                    0x0000000000000000);
>  #endif
>  }
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 04/13] target/ppc: Add SPR TBU40
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-06  6:17   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-06  6:17 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 5960 bytes --]

On Fri, May 03, 2019 at 03:53:07PM +1000, Suraj Jitindar Singh wrote:
> The spr TBU40 is used to set the upper 40 bits of the timebase
> register, present on POWER5+ and later processors.
> 
> This register can only be written by the hypervisor, and cannot be read.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
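
The TBU40 semantics then fall out of the masking in the hunk below: the
low 24 bits of the current timebase are preserved and the written value
supplies the upper 40 bits, i.e. roughly:

    new_tb = (cur_tb & 0xFFFFFFULL) | (value & ~0xFFFFFFULL);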

> ---
>  hw/ppc/ppc.c                    | 13 +++++++++++++
>  target/ppc/cpu.h                |  1 +
>  target/ppc/helper.h             |  1 +
>  target/ppc/timebase_helper.c    |  5 +++++
>  target/ppc/translate_init.inc.c | 19 +++++++++++++++++++
>  5 files changed, 39 insertions(+)
> 
> diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
> index b567156f97..b618c6f615 100644
> --- a/hw/ppc/ppc.c
> +++ b/hw/ppc/ppc.c
> @@ -710,6 +710,19 @@ void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value)
>                       &tb_env->vtb_offset, value);
>  }
>  
> +void cpu_ppc_store_tbu40(CPUPPCState *env, uint64_t value)
> +{
> +    ppc_tb_t *tb_env = env->tb_env;
> +    uint64_t tb;
> +
> +    tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> +                        tb_env->tb_offset);
> +    tb &= 0xFFFFFFUL;
> +    tb |= (value & ~0xFFFFFFUL);
> +    cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> +                     &tb_env->tb_offset, tb);
> +}
> +
>  static void cpu_ppc_tb_stop (CPUPPCState *env)
>  {
>      ppc_tb_t *tb_env = env->tb_env;
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 8d66265e5a..e324064111 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1334,6 +1334,7 @@ target_ulong cpu_ppc_load_decr (CPUPPCState *env);
>  void cpu_ppc_store_decr (CPUPPCState *env, target_ulong value);
>  target_ulong cpu_ppc_load_hdecr (CPUPPCState *env);
>  void cpu_ppc_store_hdecr (CPUPPCState *env, target_ulong value);
> +void cpu_ppc_store_tbu40(CPUPPCState *env, uint64_t value);
>  uint64_t cpu_ppc_load_purr (CPUPPCState *env);
>  void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value);
>  uint32_t cpu_ppc601_load_rtcl (CPUPPCState *env);
> diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> index 336e7802fb..6aee195528 100644
> --- a/target/ppc/helper.h
> +++ b/target/ppc/helper.h
> @@ -703,6 +703,7 @@ DEF_HELPER_FLAGS_2(store_decr, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_FLAGS_1(load_hdecr, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_2(store_hdecr, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_FLAGS_2(store_vtb, TCG_CALL_NO_RWG, void, env, tl)
> +DEF_HELPER_FLAGS_2(store_tbu40, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_2(store_hid0_601, void, env, tl)
>  DEF_HELPER_3(store_403_pbr, void, env, i32, tl)
>  DEF_HELPER_FLAGS_1(load_40x_pit, TCG_CALL_NO_RWG, tl, env)
> diff --git a/target/ppc/timebase_helper.c b/target/ppc/timebase_helper.c
> index 2395295b77..703bd9ed18 100644
> --- a/target/ppc/timebase_helper.c
> +++ b/target/ppc/timebase_helper.c
> @@ -128,6 +128,11 @@ void helper_store_vtb(CPUPPCState *env, target_ulong val)
>      cpu_ppc_store_vtb(env, val);
>  }
>  
> +void helper_store_tbu40(CPUPPCState *env, target_ulong val)
> +{
> +    cpu_ppc_store_tbu40(env, val);
> +}
> +
>  target_ulong helper_load_40x_pit(CPUPPCState *env)
>  {
>      return load_40x_pit(env);
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index a0cae58e19..8e287066e5 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -325,6 +325,11 @@ static void spr_write_vtb(DisasContext *ctx, int sprn, int gprn)
>      gen_helper_store_vtb(cpu_env, cpu_gpr[gprn]);
>  }
>  
> +static void spr_write_tbu40(DisasContext *ctx, int sprn, int gprn)
> +{
> +    gen_helper_store_tbu40(cpu_env, cpu_gpr[gprn]);
> +}
> +
>  #endif
>  #endif
>  
> @@ -7812,6 +7817,16 @@ static void gen_spr_power5p_ear(CPUPPCState *env)
>                   0x00000000);
>  }
>  
> +static void gen_spr_power5p_tb(CPUPPCState *env)
> +{
> +    /* TBU40 (High 40 bits of the Timebase register */
> +    spr_register_hv(env, SPR_TBU40, "TBU40",
> +                    SPR_NOACCESS, SPR_NOACCESS,
> +                    SPR_NOACCESS, SPR_NOACCESS,
> +                    SPR_NOACCESS, &spr_write_tbu40,
> +                    0x00000000);
> +}
> +
>  #if !defined(CONFIG_USER_ONLY)
>  static void spr_write_hmer(DisasContext *ctx, int sprn, int gprn)
>  {
> @@ -8352,6 +8367,7 @@ static void init_proc_power5plus(CPUPPCState *env)
>      gen_spr_power5p_common(env);
>      gen_spr_power5p_lpar(env);
>      gen_spr_power5p_ear(env);
> +    gen_spr_power5p_tb(env);
>  
>      /* env variables */
>      env->dcache_line_size = 128;
> @@ -8464,6 +8480,7 @@ static void init_proc_POWER7(CPUPPCState *env)
>      gen_spr_power5p_common(env);
>      gen_spr_power5p_lpar(env);
>      gen_spr_power5p_ear(env);
> +    gen_spr_power5p_tb(env);
>      gen_spr_power6_common(env);
>      gen_spr_power6_dbg(env);
>      gen_spr_power7_book4(env);
> @@ -8605,6 +8622,7 @@ static void init_proc_POWER8(CPUPPCState *env)
>      gen_spr_power5p_common(env);
>      gen_spr_power5p_lpar(env);
>      gen_spr_power5p_ear(env);
> +    gen_spr_power5p_tb(env);
>      gen_spr_power6_common(env);
>      gen_spr_power6_dbg(env);
>      gen_spr_power8_tce_address_control(env);
> @@ -8793,6 +8811,7 @@ static void init_proc_POWER9(CPUPPCState *env)
>      gen_spr_power5p_common(env);
>      gen_spr_power5p_lpar(env);
>      gen_spr_power5p_ear(env);
> +    gen_spr_power5p_tb(env);
>      gen_spr_power6_common(env);
>      gen_spr_power6_dbg(env);
>      gen_spr_power8_tce_address_control(env);

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 00/13] target/ppc: Implement KVM support under TCG
  2019-05-03  5:53 ` Suraj Jitindar Singh
                   ` (14 preceding siblings ...)
  (?)
@ 2019-05-06  6:20 ` David Gibson
  2019-05-06 23:45   ` Suraj Jitindar Singh
  -1 siblings, 1 reply; 47+ messages in thread
From: David Gibson @ 2019-05-06  6:20 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 3359 bytes --]

On Fri, May 03, 2019 at 03:53:03PM +1000, Suraj Jitindar Singh wrote:
> This patch series adds the necessary parts so that a tcg guest is able to use
> kvm facilities. That is a tcg guest can boot its own kvm guests.

The topic line is a bit hard to parse.  IIUC there are basically two
things in this series:

1) Implement / fix TCG emulation of hypervisor facilities, so that
   a TCG powernv machine can use them to run KVM guests.

2) Have the pseries machine under TCG implement the paravirtualized
   interfaces to allow nested virtualization, therefore allowing TCG
   pseries machines to run KVM guests.

Is that right?

> The main requirements for this were a few registers and instructions as well as
> some hcalls and the addition of partition scoped translation in the radix mmu
> emulation.
> 
> This can be used to boot a kvm guest under a pseries tcg guest:
> Use power9_v2.2 cpu and add -machine cap-nested-hv=on for the first guest.
> Then inside that guest boot a kvm guest as normal.
> This takes advantage of the new hcalls with qemu emulating them as a normal
> hypervisor would on a real machine.
> 
> This can also be used to boot a kvm guest under a powernv tcg guest:
> Use any power9 cpu type.
> This takes advantage of the new hv register access added.
> Note that for powernv there is no xive interrupt excalation for KVM which means
> that while the guest will boot, it won't receive any interrupts.
> 
> Suraj Jitindar Singh (13):
>   target/ppc: Implement the VTB for HV access
>   target/ppc: Work [S]PURR implementation and add HV support
>   target/ppc: Add SPR ASDR
>   target/ppc: Add SPR TBU40
>   target/ppc: Add privileged message send facilities
>   target/ppc: Enforce that the root page directory size must be at least
>     5
>   target/ppc: Handle partition scoped radix tree translation
>   target/ppc: Implement hcall H_SET_PARTITION_TABLE
>   target/ppc: Implement hcall H_ENTER_NESTED
>   target/ppc: Implement hcall H_TLB_INVALIDATE
>   target/ppc: Implement hcall H_COPY_TOFROM_GUEST
>   target/ppc: Introduce POWER9 DD2.2 cpu type
>   target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg
> 
>  hw/ppc/ppc.c                    |  46 ++++-
>  hw/ppc/spapr_caps.c             |  22 ++-
>  hw/ppc/spapr_cpu_core.c         |   1 +
>  hw/ppc/spapr_hcall.c            | 409 +++++++++++++++++++++++++++++++++++++++
>  include/hw/ppc/ppc.h            |   4 +-
>  include/hw/ppc/spapr.h          |   7 +-
>  linux-user/ppc/cpu_loop.c       |   5 +
>  target/ppc/cpu-models.c         |   2 +
>  target/ppc/cpu-models.h         |   1 +
>  target/ppc/cpu.h                |  70 +++++++
>  target/ppc/excp_helper.c        |  79 +++++++-
>  target/ppc/helper.h             |   9 +
>  target/ppc/misc_helper.c        |  46 +++++
>  target/ppc/mmu-radix64.c        | 412 ++++++++++++++++++++++++++++------------
>  target/ppc/mmu-radix64.h        |   4 +
>  target/ppc/timebase_helper.c    |  20 ++
>  target/ppc/translate.c          |  28 +++
>  target/ppc/translate_init.inc.c | 107 +++++++++--
>  18 files changed, 1115 insertions(+), 157 deletions(-)
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 00/13] target/ppc: Implement KVM support under TCG
  2019-05-06  6:20 ` David Gibson
@ 2019-05-06 23:45   ` Suraj Jitindar Singh
  0 siblings, 0 replies; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-06 23:45 UTC (permalink / raw)
  To: David Gibson; +Cc: groug, qemu-ppc, qemu-devel, clg

On Mon, 2019-05-06 at 16:20 +1000, David Gibson wrote:
> On Fri, May 03, 2019 at 03:53:03PM +1000, Suraj Jitindar Singh wrote:
> > This patch series adds the necessary parts so that a tcg guest is
> > able to use
> > kvm facilities. That is a tcg guest can boot its own kvm guests.
> 
> The topic line is a bit hard to parse.  IIUC there are basically two
> things in this series:
> 
> 1) Implement / fix TCG emulation of hypervisor facilities, so that
>    a TCG powernv machine can use them to run KVM guests.
> 
> 2) Have the pseries machine under TCG implement the paravirtualized
>    interfaces to allow nested virtualization, therefore allowing TCG
>    pseries machines to run KVM guests.
> 
> Is that right?

That is correct.

Patches 1-7 achieve 1) TCG emulation of hypervisor facilities

Patches 8-13 achieve 2) emulation of paravirtualised KVM for pseries
guests
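
Concretely, for 2) the first level guest just needs something along the
lines of (a sketch only, with most options elided):

    qemu-system-ppc64 -accel tcg -machine pseries,cap-nested-hv=on \
        -cpu power9_v2.2 ...

and then a KVM guest is booted inside it in the usual way. For 1) it's
the same idea with -machine powernv and any POWER9 cpu type, keeping in
mind the xive interrupt limitation noted in the cover letter.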

> 
> > The main requirements for this were a few registers and
> > instructions as well as
> > some hcalls and the addition of partition scoped translation in the
> > radix mmu
> > emulation.
> > 
> > This can be used to boot a kvm guest under a pseries tcg guest:
> > Use power9_v2.2 cpu and add -machine cap-nested-hv=on for the first
> > guest.
> > Then inside that guest boot a kvm guest as normal.
> > This takes advantage of the new hcalls with qemu emulating them as
> > a normal
> > hypervisor would on a real machine.
> > 
> > This can also be used to boot a kvm guest under a powernv tcg
> > guest:
> > Use any power9 cpu type.
> > This takes advantage of the new hv register access added.
> > Note that for powernv there is no xive interrupt excalation for KVM
> > which means
> > that while the guest will boot, it won't receive any interrupts.
> > 
> > Suraj Jitindar Singh (13):
> >   target/ppc: Implement the VTB for HV access
> >   target/ppc: Work [S]PURR implementation and add HV support
> >   target/ppc: Add SPR ASDR
> >   target/ppc: Add SPR TBU40
> >   target/ppc: Add privileged message send facilities
> >   target/ppc: Enforce that the root page directory size must be at
> > least
> >     5
> >   target/ppc: Handle partition scoped radix tree translation
> >   target/ppc: Implement hcall H_SET_PARTITION_TABLE
> >   target/ppc: Implement hcall H_ENTER_NESTED
> >   target/ppc: Implement hcall H_TLB_INVALIDATE
> >   target/ppc: Implement hcall H_COPY_TOFROM_GUEST
> >   target/ppc: Introduce POWER9 DD2.2 cpu type
> >   target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg
> > 
> >  hw/ppc/ppc.c                    |  46 ++++-
> >  hw/ppc/spapr_caps.c             |  22 ++-
> >  hw/ppc/spapr_cpu_core.c         |   1 +
> >  hw/ppc/spapr_hcall.c            | 409
> > +++++++++++++++++++++++++++++++++++++++
> >  include/hw/ppc/ppc.h            |   4 +-
> >  include/hw/ppc/spapr.h          |   7 +-
> >  linux-user/ppc/cpu_loop.c       |   5 +
> >  target/ppc/cpu-models.c         |   2 +
> >  target/ppc/cpu-models.h         |   1 +
> >  target/ppc/cpu.h                |  70 +++++++
> >  target/ppc/excp_helper.c        |  79 +++++++-
> >  target/ppc/helper.h             |   9 +
> >  target/ppc/misc_helper.c        |  46 +++++
> >  target/ppc/mmu-radix64.c        | 412
> > ++++++++++++++++++++++++++++------------
> >  target/ppc/mmu-radix64.h        |   4 +
> >  target/ppc/timebase_helper.c    |  20 ++
> >  target/ppc/translate.c          |  28 +++
> >  target/ppc/translate_init.inc.c | 107 +++++++++--
> >  18 files changed, 1115 insertions(+), 157 deletions(-)
> > 
> 
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 02/13] target/ppc: Work [S]PURR implementation and add HV support
  2019-05-06  6:15   ` David Gibson
@ 2019-05-07  1:28     ` Suraj Jitindar Singh
  2019-05-09  6:45       ` David Gibson
  0 siblings, 1 reply; 47+ messages in thread
From: Suraj Jitindar Singh @ 2019-05-07  1:28 UTC (permalink / raw)
  To: David Gibson; +Cc: groug, qemu-ppc, qemu-devel, clg

On Mon, 2019-05-06 at 16:15 +1000, David Gibson wrote:
> On Fri, May 03, 2019 at 03:53:05PM +1000, Suraj Jitindar Singh wrote:
> > The Processor Utilisation of Resources Register (PURR) and Scaled
> > Processor Utilisation of Resources Register (SPURR) provide an
> > estimate
> > of the resources used by the thread, present on POWER7 and later
> > processors.
> > 
> > Currently the [S]PURR registers simply count at the rate of the
> > timebase.
> > 
> > Preserve this behaviour but rework the implementation to store an
> > offset
> > like the timebase rather than doing the calculation manually. Also
> > allow
> > hypervisor write access to the register along with the currently
> > available read access.
> > 
> > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> 
> Hm.  How will this affect migration of the PURR and SPURR?

So, as it turns out, the PURR isn't actually migrated. We rely on the
fact that QEMU_CLOCK_VIRTUAL is migrated and that the PURR's value can
never be changed by the guest. Since it just counts at the same rate as
the timebase we get away with it.

For this to work we will need to add the PURR (and the VTB, for the
later patch which adds it) to the migration stream. I suggest we just
migrate by value, meaning the internal representation can in fact change
in future without breaking migration.

What this means is that this patch changing the internal representation
is fine given migration is broken anyway. When I resend this series
I'll add the PURR and VTB to the migration stream.
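
Roughly what I have in mind for that, as a sketch only (the hook names
below are illustrative, not the actual machine.c wiring), is to snapshot
the value on save and rebuild the offset on load:

    static int tb_pre_save(void *opaque)
    {
        PowerPCCPU *cpu = opaque;
        CPUPPCState *env = &cpu->env;

        /* Snapshot the running value into the sprs array so the stream
         * only ever carries the architected register value. */
        env->spr[SPR_PURR] = cpu_ppc_load_purr(env);
        /* ... and likewise for the VTB */
        return 0;
    }

    static int tb_post_load(void *opaque, int version_id)
    {
        PowerPCCPU *cpu = opaque;
        CPUPPCState *env = &cpu->env;

        /* Rebuild the internal offset representation from the value. */
        cpu_ppc_store_purr(env, env->spr[SPR_PURR]);
        return 0;
    }

That way the purr_offset representation stays private to ppc.c.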

> 
> > ---
> >  hw/ppc/ppc.c                  | 17 +++++++----------
> >  include/hw/ppc/ppc.h            |  3 +--
> >  target/ppc/cpu.h                |  1 +
> >  target/ppc/helper.h             |  1 +
> >  target/ppc/timebase_helper.c    |  5 +++++
> >  target/ppc/translate_init.inc.c | 23 +++++++++++++++--------
> >  6 files changed, 30 insertions(+), 20 deletions(-)
> > 
> > diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
> > index a57ca64626..b567156f97 100644
> > --- a/hw/ppc/ppc.c
> > +++ b/hw/ppc/ppc.c
> > @@ -819,12 +819,9 @@ target_ulong cpu_ppc_load_hdecr (CPUPPCState
> > *env)
> >  uint64_t cpu_ppc_load_purr (CPUPPCState *env)
> >  {
> >      ppc_tb_t *tb_env = env->tb_env;
> > -    uint64_t diff;
> >  
> > -    diff = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - tb_env-
> > >purr_start;
> > -
> > -    return tb_env->purr_load +
> > -        muldiv64(diff, tb_env->tb_freq, NANOSECONDS_PER_SECOND);
> > +    return cpu_ppc_get_tb(tb_env,
> > qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> > +                          tb_env->purr_offset);
> >  }
> >  
> >  /* When decrementer expires,
> > @@ -980,12 +977,12 @@ static void cpu_ppc_hdecr_cb(void *opaque)
> >      cpu_ppc_hdecr_excp(cpu);
> >  }
> >  
> > -static void cpu_ppc_store_purr(PowerPCCPU *cpu, uint64_t value)
> > +void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value)
> >  {
> > -    ppc_tb_t *tb_env = cpu->env.tb_env;
> > +    ppc_tb_t *tb_env = env->tb_env;
> >  
> > -    tb_env->purr_load = value;
> > -    tb_env->purr_start = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
> > +    cpu_ppc_store_tb(tb_env,
> > qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
> > +                     &tb_env->purr_offset, value);
> >  }
> >  
> >  static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
> > @@ -1002,7 +999,7 @@ static void cpu_ppc_set_tb_clk (void *opaque,
> > uint32_t freq)
> >       */
> >      _cpu_ppc_store_decr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
> >      _cpu_ppc_store_hdecr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 32);
> > -    cpu_ppc_store_purr(cpu, 0x0000000000000000ULL);
> > +    cpu_ppc_store_purr(env, 0x0000000000000000ULL);
> >  }
> >  
> >  static void timebase_save(PPCTimebase *tb)
> > diff --git a/include/hw/ppc/ppc.h b/include/hw/ppc/ppc.h
> > index 205150e6b4..b09ffbf300 100644
> > --- a/include/hw/ppc/ppc.h
> > +++ b/include/hw/ppc/ppc.h
> > @@ -32,8 +32,7 @@ struct ppc_tb_t {
> >      /* Hypervisor decrementer management */
> >      uint64_t hdecr_next;    /* Tick for next hdecr interrupt  */
> >      QEMUTimer *hdecr_timer;
> > -    uint64_t purr_load;
> > -    uint64_t purr_start;
> > +    int64_t purr_offset;
> >      void *opaque;
> >      uint32_t flags;
> >  };
> > diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> > index 70167bae22..19b3e1de0e 100644
> > --- a/target/ppc/cpu.h
> > +++ b/target/ppc/cpu.h
> > @@ -1335,6 +1335,7 @@ void cpu_ppc_store_decr (CPUPPCState *env,
> > target_ulong value);
> >  target_ulong cpu_ppc_load_hdecr (CPUPPCState *env);
> >  void cpu_ppc_store_hdecr (CPUPPCState *env, target_ulong value);
> >  uint64_t cpu_ppc_load_purr (CPUPPCState *env);
> > +void cpu_ppc_store_purr(CPUPPCState *env, uint64_t value);
> >  uint32_t cpu_ppc601_load_rtcl (CPUPPCState *env);
> >  uint32_t cpu_ppc601_load_rtcu (CPUPPCState *env);
> >  #if !defined(CONFIG_USER_ONLY)
> > diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> > index 3701bcbf1b..336e7802fb 100644
> > --- a/target/ppc/helper.h
> > +++ b/target/ppc/helper.h
> > @@ -686,6 +686,7 @@ DEF_HELPER_FLAGS_1(load_601_rtcu,
> > TCG_CALL_NO_RWG, tl, env)
> >  #if !defined(CONFIG_USER_ONLY)
> >  #if defined(TARGET_PPC64)
> >  DEF_HELPER_FLAGS_1(load_purr, TCG_CALL_NO_RWG, tl, env)
> > +DEF_HELPER_FLAGS_2(store_purr, TCG_CALL_NO_RWG, void, env, tl)
> >  DEF_HELPER_2(store_ptcr, void, env, tl)
> >  #endif
> >  DEF_HELPER_2(store_sdr1, void, env, tl)
> > diff --git a/target/ppc/timebase_helper.c
> > b/target/ppc/timebase_helper.c
> > index 8c3c2fe67c..2395295b77 100644
> > --- a/target/ppc/timebase_helper.c
> > +++ b/target/ppc/timebase_helper.c
> > @@ -55,6 +55,11 @@ target_ulong helper_load_purr(CPUPPCState *env)
> >  {
> >      return (target_ulong)cpu_ppc_load_purr(env);
> >  }
> > +
> > +void helper_store_purr(CPUPPCState *env, target_ulong val)
> > +{
> > +    cpu_ppc_store_purr(env, val);
> > +}
> >  #endif
> >  
> >  target_ulong helper_load_601_rtcl(CPUPPCState *env)
> > diff --git a/target/ppc/translate_init.inc.c
> > b/target/ppc/translate_init.inc.c
> > index e3f941800b..9cd33e79ef 100644
> > --- a/target/ppc/translate_init.inc.c
> > +++ b/target/ppc/translate_init.inc.c
> > @@ -285,6 +285,11 @@ static void spr_read_purr(DisasContext *ctx,
> > int gprn, int sprn)
> >      gen_helper_load_purr(cpu_gpr[gprn], cpu_env);
> >  }
> >  
> > +static void spr_write_purr(DisasContext *ctx, int sprn, int gprn)
> > +{
> > +    gen_helper_store_purr(cpu_env, cpu_gpr[gprn]);
> > +}
> > +
> >  /* HDECR */
> >  static void spr_read_hdecr(DisasContext *ctx, int gprn, int sprn)
> >  {
> > @@ -7972,14 +7977,16 @@ static void gen_spr_book3s_purr(CPUPPCState
> > *env)
> >  {
> >  #if !defined(CONFIG_USER_ONLY)
> >      /* PURR & SPURR: Hack - treat these as aliases for the TB for
> > now */
> > -    spr_register_kvm(env, SPR_PURR,   "PURR",
> > -                     &spr_read_purr, SPR_NOACCESS,
> > -                     &spr_read_purr, SPR_NOACCESS,
> > -                     KVM_REG_PPC_PURR, 0x00000000);
> > -    spr_register_kvm(env, SPR_SPURR,   "SPURR",
> > -                     &spr_read_purr, SPR_NOACCESS,
> > -                     &spr_read_purr, SPR_NOACCESS,
> > -                     KVM_REG_PPC_SPURR, 0x00000000);
> > +    spr_register_kvm_hv(env, SPR_PURR,   "PURR",
> > +                        &spr_read_purr, SPR_NOACCESS,
> > +                        &spr_read_purr, SPR_NOACCESS,
> > +                        &spr_read_purr, &spr_write_purr,
> > +                        KVM_REG_PPC_PURR, 0x00000000);
> > +    spr_register_kvm_hv(env, SPR_SPURR,   "SPURR",
> > +                        &spr_read_purr, SPR_NOACCESS,
> > +                        &spr_read_purr, SPR_NOACCESS,
> > +                        &spr_read_purr, &spr_write_purr,
> > +                        KVM_REG_PPC_SPURR, 0x00000000);
> >  #endif
> >  }
> >  
> 
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 02/13] target/ppc: Work [S]PURR implementation and add HV support
  2019-05-07  1:28     ` Suraj Jitindar Singh
@ 2019-05-09  6:45       ` David Gibson
  0 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-09  6:45 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 2355 bytes --]

On Tue, May 07, 2019 at 11:28:04AM +1000, Suraj Jitindar Singh wrote:
> On Mon, 2019-05-06 at 16:15 +1000, David Gibson wrote:
> > On Fri, May 03, 2019 at 03:53:05PM +1000, Suraj Jitindar Singh wrote:
> > > The Processor Utilisation of Resources Register (PURR) and Scaled
> > > Processor Utilisation of Resources Register (SPURR) provide an
> > > estimate
> > > of the resources used by the thread, present on POWER7 and later
> > > processors.
> > > 
> > > Currently the [S]PURR registers simply count at the rate of the
> > > timebase.
> > > 
> > > Preserve this behaviour but rework the implementation to store an
> > > offset
> > > like the timebase rather than doing the calculation manually. Also
> > > allow
> > > hypervisor write access to the register along with the currently
> > > available read access.
> > > 
> > > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> > 
> > Hm.  How will this affect migration of the PURR and SPURR?
> 
> So, as it turns out, the PURR isn't actually migrated. We rely on the
> fact that QEMU_CLOCK_VIRTUAL is migrated and that the PURR's value can
> never be changed by the guest. Since it just counts at the same rate
> as the timebase we get away with it.

Ah, ok.

> For this to work we will need to add the PURR (and the VTB, for the
> later patch which adds it) to the migration stream. I suggest we just
> migrate by value, meaning the internal representation can in fact
> change in future without breaking migration.

Yes, that sounds good, and we even already have a spot for it in the
sprs array.

At first I was thinking we'd need to fiddle about with adjusting it
afterwards to account for the migration downtime (as we do for tb),
but then I realised that it probably makes more sense *not* to count
the migration downtime against purr and spurr, which makes it even
easier.
> 
> What this means is that this patch changing the internal representation
> is fine given migration is broken anyway. When I resend this series
> I'll add the PURR and VTB to the migration stream.

Ok, sounds good, therefore:

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 05/13] target/ppc: Add privileged message send facilities
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  2:09   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  2:09 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 16073 bytes --]

On Fri, May 03, 2019 at 03:53:08PM +1000, Suraj Jitindar Singh wrote:
> Privileged message send facilities exist on POWER8 processors and later
> and include a register and instructions which can be used to generate,
> observe/modify the state of and clear privileged doorbell exceptions as
> described below.
> 
> The Directed Privileged Doorbell Exception State (DPDES) register
> reflects the state of pending privileged doorbell exceptions and can
> also be used to modify that state. The register can be used to read and
> modify the state of privileged doorbell exceptions for all threads of a
> subprocessor and thus is a shared facility for that subprocessor. The
> register can be read/written by the hypervisor and read by the
> supervisor if enabled in the HFSCR, otherwise a hypervisor facility
> unavailable exception is generated.
> 
> The privileged message send and clear instructions (msgsndp & msgclrp)
> are used to generate and clear the presence of a directed privileged
> doorbell exception, respectively. The msgsndp instruction can be used to
> target any thread of the current subprocessor, msgclrp acts on the
> thread issuing the instruction. These instructions are privileged, but
> will generate a hypervisor facility unavailable exception if not enabled
> in the HFSCR and executed in privileged non-hypervisor state.
> 
> Add and implement this register and instructions by reading or modifying the
> pending interrupt state of the cpu.
> 
> Note that TCG only supports one thread per core and so we only need to
> worry about the cpu making the access.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

I think this would be clearer if you put the framework for the
facility unavailable exception into a separate patch.  Apart from
that, LGTM.

> ---
>  target/ppc/cpu.h                |  7 +++++
>  target/ppc/excp_helper.c        | 63 +++++++++++++++++++++++++++++++++++++----
>  target/ppc/helper.h             |  5 ++++
>  target/ppc/misc_helper.c        | 46 ++++++++++++++++++++++++++++++
>  target/ppc/translate.c          | 28 ++++++++++++++++++
>  target/ppc/translate_init.inc.c | 40 ++++++++++++++++++++++++++
>  6 files changed, 184 insertions(+), 5 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index e324064111..1d2a088391 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -425,6 +425,10 @@ typedef struct ppc_v3_pate_t {
>  #define PSSCR_ESL         PPC_BIT(42) /* Enable State Loss */
>  #define PSSCR_EC          PPC_BIT(43) /* Exit Criterion */
>  
> +/* HFSCR bits */
> +#define HFSCR_MSGSNDP     PPC_BIT(53) /* Privileged Message Send Facilities */
> +#define HFSCR_IC_MSGSNDP  0xA
> +
>  #define msr_sf   ((env->msr >> MSR_SF)   & 1)
>  #define msr_isf  ((env->msr >> MSR_ISF)  & 1)
>  #define msr_shv  ((env->msr >> MSR_SHV)  & 1)
> @@ -1355,6 +1359,8 @@ void cpu_ppc_set_vhyp(PowerPCCPU *cpu, PPCVirtualHypervisor *vhyp);
>  #endif
>  
>  void store_fpscr(CPUPPCState *env, uint64_t arg, uint32_t mask);
> +void gen_hfscr_facility_check(DisasContext *ctx, int facility_sprn, int bit,
> +                              int sprn, int cause);
>  
>  static inline uint64_t ppc_dump_gpr(CPUPPCState *env, int gprn)
>  {
> @@ -1501,6 +1507,7 @@ void ppc_compat_add_property(Object *obj, const char *name,
>  #define SPR_MPC_ICTRL         (0x09E)
>  #define SPR_MPC_BAR           (0x09F)
>  #define SPR_PSPB              (0x09F)
> +#define SPR_DPDES             (0x0B0)
>  #define SPR_DAWR              (0x0B4)
>  #define SPR_RPR               (0x0BA)
>  #define SPR_CIABR             (0x0BB)
> diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
> index beafcf1ebd..7a4da7bdba 100644
> --- a/target/ppc/excp_helper.c
> +++ b/target/ppc/excp_helper.c
> @@ -461,6 +461,13 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
>          env->spr[SPR_FSCR] |= ((target_ulong)env->error_code << 56);
>  #endif
>          break;
> +    case POWERPC_EXCP_HV_FU:     /* Hypervisor Facility Unavailable Exception */
> +        env->spr[SPR_HFSCR] |= ((target_ulong)env->error_code << FSCR_IC_POS);
> +        srr0 = SPR_HSRR0;
> +        srr1 = SPR_HSRR1;
> +        new_msr |= (target_ulong)MSR_HVB;
> +        new_msr |= env->msr & ((target_ulong)1 << MSR_RI);
> +        break;
>      case POWERPC_EXCP_PIT:       /* Programmable interval timer interrupt    */
>          LOG_EXCP("PIT exception\n");
>          break;
> @@ -884,7 +891,11 @@ static void ppc_hw_interrupt(CPUPPCState *env)
>          }
>          if (env->pending_interrupts & (1 << PPC_INTERRUPT_DOORBELL)) {
>              env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DOORBELL);
> -            powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_DOORI);
> +            if (env->insns_flags & PPC_SEGMENT_64B) {
> +                powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_SDOOR);
> +            } else {
> +                powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_DOORI);
> +            }
>              return;
>          }
>          if (env->pending_interrupts & (1 << PPC_INTERRUPT_HDOORBELL)) {
> @@ -1202,19 +1213,26 @@ void helper_msgsnd(target_ulong rb)
>  }
>  
>  /* Server Processor Control */
> -static int book3s_dbell2irq(target_ulong rb)
> +static int book3s_dbell2irq(target_ulong rb, bool hv_dbell)
>  {
>      int msg = rb & DBELL_TYPE_MASK;
>  
>      /* A Directed Hypervisor Doorbell message is sent only if the
>       * message type is 5. All other types are reserved and the
>       * instruction is a no-op */
> -    return msg == DBELL_TYPE_DBELL_SERVER ? PPC_INTERRUPT_HDOORBELL : -1;
> +    if (msg == DBELL_TYPE_DBELL_SERVER) {
> +        if (hv_dbell)
> +            return PPC_INTERRUPT_HDOORBELL;
> +        else
> +            return PPC_INTERRUPT_DOORBELL;
> +    }
> +
> +    return -1;
>  }
>  
>  void helper_book3s_msgclr(CPUPPCState *env, target_ulong rb)
>  {
> -    int irq = book3s_dbell2irq(rb);
> +    int irq = book3s_dbell2irq(rb, 1);
>  
>      if (irq < 0) {
>          return;
> @@ -1225,7 +1243,42 @@ void helper_book3s_msgclr(CPUPPCState *env, target_ulong rb)
>  
>  void helper_book3s_msgsnd(target_ulong rb)
>  {
> -    int irq = book3s_dbell2irq(rb);
> +    int irq = book3s_dbell2irq(rb, 1);
> +    int pir = rb & DBELL_PROCIDTAG_MASK;
> +    CPUState *cs;
> +
> +    if (irq < 0) {
> +        return;
> +    }
> +
> +    qemu_mutex_lock_iothread();
> +    CPU_FOREACH(cs) {
> +        PowerPCCPU *cpu = POWERPC_CPU(cs);
> +        CPUPPCState *cenv = &cpu->env;
> +
> +        /* TODO: broadcast message to all threads of the same  processor */
> +        if (cenv->spr_cb[SPR_PIR].default_value == pir) {
> +            cenv->pending_interrupts |= 1 << irq;
> +            cpu_interrupt(cs, CPU_INTERRUPT_HARD);
> +        }
> +    }
> +    qemu_mutex_unlock_iothread();
> +}
> +
> +void helper_book3s_msgclrp(CPUPPCState *env, target_ulong rb)
> +{
> +    int irq = book3s_dbell2irq(rb, 0);
> +
> +    if (irq < 0) {
> +        return;
> +    }
> +
> +    env->pending_interrupts &= ~(1 << irq);
> +}
> +
> +void helper_book3s_msgsndp(target_ulong rb)
> +{
> +    int irq = book3s_dbell2irq(rb, 0);
>      int pir = rb & DBELL_PROCIDTAG_MASK;
>      CPUState *cs;
>  
> diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> index 6aee195528..040f59d1af 100644
> --- a/target/ppc/helper.h
> +++ b/target/ppc/helper.h
> @@ -657,6 +657,8 @@ DEF_HELPER_1(msgsnd, void, tl)
>  DEF_HELPER_2(msgclr, void, env, tl)
>  DEF_HELPER_1(book3s_msgsnd, void, tl)
>  DEF_HELPER_2(book3s_msgclr, void, env, tl)
> +DEF_HELPER_1(book3s_msgsndp, void, tl)
> +DEF_HELPER_2(book3s_msgclrp, void, env, tl)
>  #endif
>  
>  DEF_HELPER_4(dlmzb, tl, env, tl, tl, i32)
> @@ -674,6 +676,7 @@ DEF_HELPER_3(store_dcr, void, env, tl, tl)
>  
>  DEF_HELPER_2(load_dump_spr, void, env, i32)
>  DEF_HELPER_2(store_dump_spr, void, env, i32)
> +DEF_HELPER_4(hfscr_facility_check, void, env, i32, i32, i32)
>  DEF_HELPER_4(fscr_facility_check, void, env, i32, i32, i32)
>  DEF_HELPER_4(msr_facility_check, void, env, i32, i32, i32)
>  DEF_HELPER_FLAGS_1(load_tbl, TCG_CALL_NO_RWG, tl, env)
> @@ -688,6 +691,8 @@ DEF_HELPER_FLAGS_1(load_601_rtcu, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_1(load_purr, TCG_CALL_NO_RWG, tl, env)
>  DEF_HELPER_FLAGS_2(store_purr, TCG_CALL_NO_RWG, void, env, tl)
>  DEF_HELPER_2(store_ptcr, void, env, tl)
> +DEF_HELPER_FLAGS_1(load_dpdes, TCG_CALL_NO_RWG, tl, env)
> +DEF_HELPER_FLAGS_2(store_dpdes, TCG_CALL_NO_RWG, void, env, tl)
>  #endif
>  DEF_HELPER_2(store_sdr1, void, env, tl)
>  DEF_HELPER_2(store_pidr, void, env, tl)
> diff --git a/target/ppc/misc_helper.c b/target/ppc/misc_helper.c
> index c65d1ade15..d7d4acca7f 100644
> --- a/target/ppc/misc_helper.c
> +++ b/target/ppc/misc_helper.c
> @@ -39,6 +39,17 @@ void helper_store_dump_spr(CPUPPCState *env, uint32_t sprn)
>  }
>  
>  #ifdef TARGET_PPC64
> +static void raise_hv_fu_exception(CPUPPCState *env, uint32_t bit,
> +                                  uint32_t sprn, uint32_t cause,
> +                                  uintptr_t raddr)
> +{
> +    qemu_log("Facility SPR %d is unavailable (SPR HFSCR:%d)\n", sprn, bit);
> +
> +    env->spr[SPR_HFSCR] &= ~((target_ulong)FSCR_IC_MASK << FSCR_IC_POS);
> +
> +    raise_exception_err_ra(env, POWERPC_EXCP_HV_FU, cause, raddr);
> +}
> +
>  static void raise_fu_exception(CPUPPCState *env, uint32_t bit,
>                                 uint32_t sprn, uint32_t cause,
>                                 uintptr_t raddr)
> @@ -53,6 +64,17 @@ static void raise_fu_exception(CPUPPCState *env, uint32_t bit,
>  }
>  #endif
>  
> +void helper_hfscr_facility_check(CPUPPCState *env, uint32_t bit,
> +                                 uint32_t sprn, uint32_t cause)
> +{
> +#ifdef TARGET_PPC64
> +    if ((env->msr_mask & MSR_HVB) && !msr_hv &&
> +                                     !(env->spr[SPR_HFSCR] & (1UL << bit))) {
> +        raise_hv_fu_exception(env, bit, sprn, cause, GETPC());
> +    }
> +#endif
> +}
> +
>  void helper_fscr_facility_check(CPUPPCState *env, uint32_t bit,
>                                  uint32_t sprn, uint32_t cause)
>  {
> @@ -107,6 +129,30 @@ void helper_store_pcr(CPUPPCState *env, target_ulong value)
>  
>      env->spr[SPR_PCR] = value & pcc->pcr_mask;
>  }
> +
> +target_ulong helper_load_dpdes(CPUPPCState *env)
> +{
> +    helper_hfscr_facility_check(env, HFSCR_MSGSNDP, SPR_DPDES,
> +                                HFSCR_IC_MSGSNDP);
> +
> +    if (env->pending_interrupts & (1 << PPC_INTERRUPT_DOORBELL))
> +        return 1;
> +    return 0;
> +}
> +
> +void helper_store_dpdes(CPUPPCState *env, target_ulong val)
> +{
> +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
> +    CPUState *cs = CPU(cpu);
> +
> +    if (val) {
> +        /* Only one cpu for now */
> +        env->pending_interrupts |= 1 << PPC_INTERRUPT_DOORBELL;
> +        cpu_interrupt(cs, CPU_INTERRUPT_HARD);
> +    } else {
> +        env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DOORBELL);
> +    }
> +}
>  #endif /* defined(TARGET_PPC64) */
>  
>  void helper_store_pidr(CPUPPCState *env, target_ulong val)
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index fb42585a1c..2c3e83d18e 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -6537,6 +6537,30 @@ static void gen_msgsnd(DisasContext *ctx)
>  #endif /* defined(CONFIG_USER_ONLY) */
>  }
>  
> +static void gen_msgclrp(DisasContext *ctx)
> +{
> +#if defined(CONFIG_USER_ONLY)
> +    GEN_PRIV;
> +#else
> +    CHK_SV;
> +    gen_hfscr_facility_check(ctx, SPR_HFSCR, HFSCR_MSGSNDP, 0,
> +                             HFSCR_IC_MSGSNDP);
> +    gen_helper_book3s_msgclrp(cpu_env, cpu_gpr[rB(ctx->opcode)]);
> +#endif /* defined(CONFIG_USER_ONLY) */
> +}
> +
> +static void gen_msgsndp(DisasContext *ctx)
> +{
> +#if defined(CONFIG_USER_ONLY)
> +    GEN_PRIV;
> +#else
> +    CHK_SV;
> +    gen_hfscr_facility_check(ctx, SPR_HFSCR, HFSCR_MSGSNDP, 0,
> +                             HFSCR_IC_MSGSNDP);
> +    gen_helper_book3s_msgsndp(cpu_gpr[rB(ctx->opcode)]);
> +#endif /* defined(CONFIG_USER_ONLY) */
> +}
> +
>  static void gen_msgsync(DisasContext *ctx)
>  {
>  #if defined(CONFIG_USER_ONLY)
> @@ -7054,6 +7078,10 @@ GEN_HANDLER2_E(msgclr, "msgclr", 0x1F, 0x0E, 0x07, 0x03ff0001,
>                 PPC_NONE, PPC2_PRCNTL),
>  GEN_HANDLER2_E(msgsync, "msgsync", 0x1F, 0x16, 0x1B, 0x00000000,
>                 PPC_NONE, PPC2_PRCNTL),
> +GEN_HANDLER2_E(msgsndp, "msgsndp", 0x1F, 0x0E, 0x04, 0x03ff0001,
> +               PPC_NONE, PPC2_ISA207S),
> +GEN_HANDLER2_E(msgclrp, "msgclrp", 0x1F, 0x0E, 0x05, 0x03ff0001,
> +               PPC_NONE, PPC2_ISA207S),
>  GEN_HANDLER(wrtee, 0x1F, 0x03, 0x04, 0x000FFC01, PPC_WRTEE),
>  GEN_HANDLER(wrteei, 0x1F, 0x03, 0x05, 0x000E7C01, PPC_WRTEE),
>  GEN_HANDLER(dlmzb, 0x1F, 0x0E, 0x02, 0x00000000, PPC_440_SPEC),
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index 8e287066e5..46f9399097 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -454,6 +454,19 @@ static void spr_write_pcr(DisasContext *ctx, int sprn, int gprn)
>  {
>      gen_helper_store_pcr(cpu_env, cpu_gpr[gprn]);
>  }
> +
> +/* DPDES */
> +static void spr_read_dpdes(DisasContext *ctx, int gprn, int sprn)
> +{
> +    gen_hfscr_facility_check(ctx, SPR_HFSCR, HFSCR_MSGSNDP, sprn,
> +                             HFSCR_IC_MSGSNDP);
> +    gen_helper_load_dpdes(cpu_gpr[gprn], cpu_env);
> +}
> +
> +static void spr_write_dpdes(DisasContext *ctx, int sprn, int gprn)
> +{
> +    gen_helper_store_dpdes(cpu_env, cpu_gpr[gprn]);
> +}
>  #endif
>  #endif
>  
> @@ -7478,6 +7491,20 @@ POWERPC_FAMILY(e600)(ObjectClass *oc, void *data)
>  #define POWERPC970_HID5_INIT 0x00000000
>  #endif
>  
> +void gen_hfscr_facility_check(DisasContext *ctx, int facility_sprn, int bit,
> +                              int sprn, int cause)
> +{
> +    TCGv_i32 t1 = tcg_const_i32(bit);
> +    TCGv_i32 t2 = tcg_const_i32(sprn);
> +    TCGv_i32 t3 = tcg_const_i32(cause);
> +
> +    gen_helper_hfscr_facility_check(cpu_env, t1, t2, t3);
> +
> +    tcg_temp_free_i32(t3);
> +    tcg_temp_free_i32(t2);
> +    tcg_temp_free_i32(t1);
> +}
> +
>  static void gen_fscr_facility_check(DisasContext *ctx, int facility_sprn,
>                                      int bit, int sprn, int cause)
>  {
> @@ -8249,6 +8276,17 @@ static void gen_spr_power8_rpr(CPUPPCState *env)
>  #endif
>  }
>  
> +static void gen_spr_power8_dpdes(CPUPPCState *env)
> +{
> +#if !defined(CONFIG_USER_ONLY)
> +    spr_register_kvm_hv(env, SPR_DPDES, "DPDES",
> +                        SPR_NOACCESS, SPR_NOACCESS,
> +                        &spr_read_dpdes, SPR_NOACCESS,
> +                        &spr_read_dpdes, &spr_write_dpdes,
> +                        KVM_REG_PPC_DPDES, 0x0UL);
> +#endif
> +}
> +
>  static void gen_spr_power9_mmu(CPUPPCState *env)
>  {
>  #if !defined(CONFIG_USER_ONLY)
> @@ -8637,6 +8675,7 @@ static void init_proc_POWER8(CPUPPCState *env)
>      gen_spr_power8_ic(env);
>      gen_spr_power8_book4(env);
>      gen_spr_power8_rpr(env);
> +    gen_spr_power8_dpdes(env);
>  
>      /* env variables */
>      env->dcache_line_size = 128;
> @@ -8826,6 +8865,7 @@ static void init_proc_POWER9(CPUPPCState *env)
>      gen_spr_power8_ic(env);
>      gen_spr_power8_book4(env);
>      gen_spr_power8_rpr(env);
> +    gen_spr_power8_dpdes(env);
>      gen_spr_power9_mmu(env);
>  
>      /* POWER9 Specific registers */

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 06/13] target/ppc: Enforce that the root page directory size must be at least 5
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  2:11   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  2:11 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 1214 bytes --]

On Fri, May 03, 2019 at 03:53:09PM +1000, Suraj Jitindar Singh wrote:
> According to the ISA the root page directory size of a radix tree for
> either process or partition scoped translation must be >= 5.
> 
> Thus add this to the list of conditions checked when validating the
> partition table entry in validate_pate();
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  target/ppc/mmu-radix64.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index a6ab290323..afa5ba506a 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -249,6 +249,8 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>      if (lpid == 0 && !msr_hv) {
>          return false;
>      }
> +    if ((pate->dw0 & PATE1_R_PRTS) < 5)
> +        return false;
>      /* More checks ... */
>      return true;
>  }

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 07/13] target/ppc: Handle partition scoped radix tree translation
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  2:28   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  2:28 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 24447 bytes --]

On Fri, May 03, 2019 at 03:53:10PM +1000, Suraj Jitindar Singh wrote:
> Radix tree translation is a 2 step process:
> 
> Process Scoped Translation:
> Effective Address (EA) -> Virtual Address (VA)
> 
> Partition Scoped Translation:
> Virtual Address (VA) -> Real Address (RA)
> 
> Performed based on:
>                                       MSR[HV]
>            -----------------------------------------------
>            |             |     HV = 0    |     HV = 1    |
>            -----------------------------------------------
>            | Relocation  |   Partition   |      No       |
>            | = Off       |    Scoped     |  Translation  |
> Relocation -----------------------------------------------
>            | Relocation  |  Partition &  |    Process    |
>            | = On        |Process Scoped |    Scoped     |
>            -----------------------------------------------
> 
> Currently only process scoped translation is handled.
> Implement partition scoped translation.
> 
> The process of using the radix trees to perform partition scoped
> translation is identical to process scoped translation, however
> hypervisor exceptions are generated, and thus we can reuse the radix
> tree traversing code.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> ---
>  target/ppc/cpu.h         |   2 +
>  target/ppc/excp_helper.c |   3 +-
>  target/ppc/mmu-radix64.c | 407 +++++++++++++++++++++++++++++++++--------------
>  3 files changed, 293 insertions(+), 119 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 1d2a088391..3acc248f40 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -501,6 +501,8 @@ typedef struct ppc_v3_pate_t {
>  /* Unsupported Radix Tree Configuration */
>  #define DSISR_R_BADCONFIG        0x00080000
>  #define DSISR_ATOMIC_RC          0x00040000
> +/* Unable to translate address of (guest) pde or process/page table entry */
> +#define DSISR_PRTABLE_FAULT      0x00020000
>  
>  /* SRR1 error code fields */
>  
> diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
> index 7a4da7bdba..10091d4624 100644
> --- a/target/ppc/excp_helper.c
> +++ b/target/ppc/excp_helper.c
> @@ -441,9 +441,10 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
>      case POWERPC_EXCP_ISEG:      /* Instruction segment exception            */
>      case POWERPC_EXCP_TRACE:     /* Trace exception                          */
>          break;
> +    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
> +        msr |= env->error_code;
>      case POWERPC_EXCP_HDECR:     /* Hypervisor decrementer exception         */
>      case POWERPC_EXCP_HDSI:      /* Hypervisor data storage exception        */
> -    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
>      case POWERPC_EXCP_HDSEG:     /* Hypervisor data segment exception        */
>      case POWERPC_EXCP_HISEG:     /* Hypervisor instruction segment exception */
>      case POWERPC_EXCP_SDOOR_HV:  /* Hypervisor Doorbell interrupt            */
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index afa5ba506a..6118ad1b00 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -112,9 +112,31 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
>      }
>  }
>  
> +static void ppc_radix64_raise_hsi(PowerPCCPU *cpu, int rwx, vaddr eaddr,
> +                                  hwaddr g_raddr, uint32_t cause)
> +{
> +    CPUState *cs = CPU(cpu);
> +    CPUPPCState *env = &cpu->env;
> +
> +    if (rwx == 2) { /* H Instruction Storage Interrupt */
> +        cs->exception_index = POWERPC_EXCP_HISI;
> +        env->spr[SPR_ASDR] = g_raddr;
> +        env->error_code = cause;
> +    } else { /* H Data Storage Interrupt */
> +        cs->exception_index = POWERPC_EXCP_HDSI;
> +        if (rwx == 1) { /* Write -> Store */
> +            cause |= DSISR_ISSTORE;
> +        }
> +        env->spr[SPR_HDSISR] = cause;
> +        env->spr[SPR_HDAR] = eaddr;
> +        env->spr[SPR_ASDR] = g_raddr;
> +        env->error_code = 0;
> +    }
> +}
>  
>  static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
> -                                   int *fault_cause, int *prot)
> +                                   int *fault_cause, int *prot,
> +                                   bool partition_scoped)
>  {
>      CPUPPCState *env = &cpu->env;
>      const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
> @@ -130,11 +152,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
>      }
>  
>      /* Determine permissions allowed by Encoded Access Authority */
> -    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
> +    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
>          *prot = 0;
> -    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
> +    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
>          *prot = ppc_radix64_get_prot_eaa(pte);
> -    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
> +    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
>          *prot = ppc_radix64_get_prot_eaa(pte);
>          *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
>      }
> @@ -199,44 +221,196 @@ static uint64_t ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte, hwadd
>      return npte;
>  }
>  
> -static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
> -                                      uint64_t base_addr, uint64_t nls,
> -                                      hwaddr *raddr, int *psize,
> -                                      int *fault_cause, hwaddr *pte_addr)
> +static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
> +                                       uint64_t *pte_addr, uint64_t *nls,
> +                                       int *psize, int *fault_cause)
>  {
>      CPUState *cs = CPU(cpu);
>      uint64_t index, pde;
>  
> -    if (nls < 5) { /* Directory maps less than 2**5 entries */
> +    if (*nls < 5) { /* Directory maps less than 2**5 entries */
>          *fault_cause |= DSISR_R_BADCONFIG;
>          return 0;
>      }
>  
>      /* Read page <directory/table> entry from guest address space */
> -    index = eaddr >> (*psize - nls); /* Shift */
> -    index &= ((1UL << nls) - 1); /* Mask */
> -    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
> -    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
> +    pde = ldq_phys(cs->as, *pte_addr);
> +    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
>          *fault_cause |= DSISR_NOPTE;
>          return 0;
>      }
>  
> -    *psize -= nls;
> +    *psize -= *nls;
> +    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
> +        *nls = pde & R_PDE_NLS;
> +        index = eaddr >> (*psize - *nls);       /* Shift */
> +        index &= ((1UL << *nls) - 1);           /* Mask */
> +        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
> +    }
> +    return pde;
> +}
> +
> +static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
> +                                      uint64_t base_addr, uint64_t nls,
> +                                      hwaddr *raddr, int *psize,
> +                                      int *fault_cause, hwaddr *pte_addr)
> +{
> +    uint64_t index, pde;
> +
> +    index = eaddr >> (*psize - nls);    /* Shift */
> +    index &= ((1UL << nls) - 1);       /* Mask */
> +    *pte_addr = base_addr + (index * sizeof(pde));
> +    do {
> +        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
> +                                     fault_cause);
> +    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
>  
> -    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
> -    if (pde & R_PTE_LEAF) {
> +    /* Did we find a valid leaf? */
> +    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
>          uint64_t rpn = pde & R_PTE_RPN;
>          uint64_t mask = (1UL << *psize) - 1;
>  
>          /* Or high bits of rpn and low bits to ea to form whole real addr */
>          *raddr = (rpn & ~mask) | (eaddr & mask);
> -        *pte_addr = base_addr + (index * sizeof(pde));
> -        return pde;
>      }
>  
> -    /* Next Level of Radix Tree */
> -    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
> -                                 raddr, psize, fault_cause, pte_addr);
> +    return pde;
> +}
> +
> +static int ppc_radix64_partition_scoped_xlate(PowerPCCPU *cpu, int rwx,
> +                                              vaddr eaddr, hwaddr g_raddr,
> +                                              ppc_v3_pate_t pate,
> +                                              hwaddr *h_raddr, int *h_prot,
> +                                              int *h_page_size, bool pde_addr,
> +                                              bool cause_excp)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    int fault_cause = 0;
> +    hwaddr pte_addr;
> +    uint64_t pte;
> +
> +restart:
> +    *h_page_size = PRTBE_R_GET_RTS(pate.dw0);
> +    pte = ppc_radix64_walk_tree(cpu, g_raddr, pate.dw0 & PRTBE_R_RPDB,
> +                                pate.dw0 & PRTBE_R_RPDS, h_raddr, h_page_size,
> +                                &fault_cause, &pte_addr);
> +    /* No valid pte or access denied due to protection */
> +    if (!(pte & R_PTE_VALID) ||
> +            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, h_prot, 1)) {
> +        if (pde_addr) /* address being translated was that of a guest pde */
> +            fault_cause |= DSISR_PRTABLE_FAULT;
> +        if (cause_excp)
> +            ppc_radix64_raise_hsi(cpu, rwx, eaddr, g_raddr, fault_cause);
> +        return 1;
> +    }
> +
> +    /* Update Reference and Change Bits */
> +    if (ppc_radix64_hw_rc_updates(env)) {
> +        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
> +        if (!pte) {
> +            goto restart;
> +        }
> +    }
> +
> +    /* If the page doesn't have C, treat it as read only */
> +    if (!(pte & R_PTE_C))
> +        *h_prot &= ~PAGE_WRITE;
> +
> +    return 0;
> +}
> +
> +static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
> +                                            vaddr eaddr, uint64_t lpid, uint64_t pid,
> +                                            ppc_v3_pate_t pate, hwaddr *g_raddr,
> +                                            int *g_prot, int *g_page_size,
> +                                            bool cause_excp)
> +{
> +    CPUState *cs = CPU(cpu);
> +    CPUPPCState *env = &cpu->env;
> +    uint64_t offset, size, prtbe_addr, prtbe0, base_addr, nls, index, pte;
> +    int fault_cause = 0, h_page_size, h_prot, ret;
> +    hwaddr h_raddr, pte_addr;
> +
> +    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> +    offset = pid * sizeof(struct prtb_entry);
> +    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> +    if (offset >= size) {
> +        /* offset exceeds size of the process table */
> +        if (cause_excp)
> +            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> +        return 1;
> +    }
> +    prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
> +    /* address subject to partition scoped translation */
> +    if (cpu->vhyp && (lpid == 0)) {
> +        prtbe0 = ldq_phys(cs->as, prtbe_addr);
> +    } else {
> +        ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, prtbe_addr,
> +                                                 pate, &h_raddr, &h_prot,
> +                                                 &h_page_size, 1, 1);
> +        if (ret)
> +            return ret;
> +        prtbe0 = ldq_phys(cs->as, h_raddr);
> +    }
> +
> +    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> +restart:
> +    *g_page_size = PRTBE_R_GET_RTS(prtbe0);
> +    base_addr = prtbe0 & PRTBE_R_RPDB;
> +    nls = prtbe0 & PRTBE_R_RPDS;
> +    if (msr_hv || (cpu->vhyp && (lpid == 0))) {
> +        /* Can treat process tree addresses as real addresses */
> +        pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK, base_addr, nls,
> +                                    g_raddr, g_page_size, &fault_cause,
> +                                    &pte_addr);
> +    } else {
> +        index = (eaddr & R_EADDR_MASK) >> (*g_page_size - nls); /* Shift */
> +        index &= ((1UL << nls) - 1);                            /* Mask */
> +        pte_addr = base_addr + (index * sizeof(pte));
> +
> +        /* Each process tree address subject to partition scoped translation */
> +        do {
> +            ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, pte_addr,
> +                                                     pate, &h_raddr, &h_prot,
> +                                                     &h_page_size, 1, 1);
> +            if (ret)
> +                return ret;
> +
> +            pte = ppc_radix64_next_level(cpu, eaddr & R_EADDR_MASK, &h_raddr,
> +                                         &nls, g_page_size, &fault_cause);
> +            pte_addr = h_raddr;
> +        } while ((pte & R_PTE_VALID) && !(pte & R_PTE_LEAF));
> +
> +        /* Did we find a valid leaf? */
> +        if ((pte & R_PTE_VALID) && (pte & R_PTE_LEAF)) {
> +            uint64_t rpn = pte & R_PTE_RPN;
> +            uint64_t mask = (1UL << *g_page_size) - 1;
> +
> +            /* Or high bits of rpn and low bits to ea to form whole real addr */
> +            *g_raddr = (rpn & ~mask) | (eaddr & mask);
> +        }
> +    }
> +
> +    if (!(pte & R_PTE_VALID) ||
> +            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {
> +        /* No valid pte or access denied due to protection */
> +        if (cause_excp)
> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> +        return 1;
> +    }
> +
> +    /* Update Reference and Change Bits */
> +    if (ppc_radix64_hw_rc_updates(env)) {
> +        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
> +        if (!pte)
> +            goto restart;
> +    }
> +
> +    /* If the page doesn't have C, treat it as read only */
> +    if (!(pte & R_PTE_C))
> +        *g_prot &= ~PAGE_WRITE;
> +
> +    return 0;
>  }
>  
>  static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
> @@ -255,22 +429,99 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>      return true;
>  }
>  
> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> +                             uint64_t lpid, uint64_t pid, bool relocation,
> +                             hwaddr *raddr, int *psizep, int *protp,
> +                             bool cause_excp)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    ppc_v3_pate_t pate;
> +    int psize, prot;
> +    hwaddr g_raddr;
> +
> +    *psizep = INT_MAX;
> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
> +
> +    /* Get Process Table */
> +    if (cpu->vhyp && (lpid == 0)) {
> +        PPCVirtualHypervisorClass *vhc;
> +        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> +        vhc->get_pate(cpu->vhyp, &pate);
> +    } else {
> +        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> +            if (cause_excp)
> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> +            return 1;
> +        }
> +        if (!validate_pate(cpu, lpid, &pate)) {
> +            if (cause_excp)
> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> +            return 1;
> +        }
> +    }
> +
> +    /*
> +     * Radix tree translation is a 2 step translation process:
> +     * 1. Process Scoped translation - Guest Eff Addr -> Guest Real Addr
> +     * 2. Partition Scoped translation - Guest Real Addr -> Host Real Addr
> +     *
> +     *                                       MSR[HV]
> +     *            -----------------------------------------------
> +     *            |             |     HV = 0    |     HV = 1    |
> +     *            -----------------------------------------------
> +     *            | Relocation  |   Partition   |      No       |
> +     *            | = Off       |    Scoped     |  Translation  |
> +     * Relocation -----------------------------------------------
> +     *            | Relocation  |  Partition &  |    Process    |
> +     *            | = On        |Process Scoped |    Scoped     |
> +     *            -----------------------------------------------
> +     */
> +
> +    /* Perform process scoped translation if relocation enabled */
> +    if (relocation) {
> +        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
> +                                                   pate, &g_raddr, &prot,
> +                                                   &psize, cause_excp);
> +        if (ret)
> +            return ret;
> +        *psizep = MIN(*psizep, psize);
> +        *protp &= prot;
> +    } else {
> +        g_raddr = eaddr & R_EADDR_MASK;
> +    }
> +
> +    /* Perform partition scoped xlate if !HV or HV access to quadrants 1 or 2 */

I'm not seeing any test on the quadrant below.
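
Something along these lines, wherever lpid gets chosen, is roughly what
I'd have expected (just a sketch, with the quadrant taken from the top
two bits of the EA):

    int quadrant = (eaddr >> 62) & 3;

    if (msr_hv && (quadrant == 1 || quadrant == 2)) {
        lpid = env->spr[SPR_LPIDR];   /* HV access into guest space */
    } else if (msr_hv) {
        lpid = 0;                     /* hypervisor's own quadrants 0/3 */
    } else {
        lpid = env->spr[SPR_LPIDR];   /* guest, as you already have */
    }

so that the partition scoped path here is also taken for HV accesses to
quadrants 1 and 2.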

> +    if ((lpid != 0) || (!cpu->vhyp && !msr_hv)) {
> +        int ret = ppc_radix64_partition_scoped_xlate(cpu, rwx, eaddr, g_raddr,
> +                                                     pate, raddr, &prot, &psize,
> +                                                     0, cause_excp);
> +        if (ret)
> +            return ret;
> +        *psizep = MIN(*psizep, psize);
> +        *protp &= prot;
> +    } else {
> +        *raddr = g_raddr;
> +    }
> +
> +    return 0;
> +}
> +
>  int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>                                   int mmu_idx)
>  {
>      CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> -    PPCVirtualHypervisorClass *vhc;
> -    hwaddr raddr, pte_addr;
> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> -    int page_size, prot, fault_cause = 0;
> -    ppc_v3_pate_t pate;
> +    uint64_t pid, lpid = env->spr[SPR_LPIDR];
> +    int psize, prot;
> +    bool relocation;
> +    hwaddr raddr;
>  
> +    assert(!(msr_hv && cpu->vhyp));
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>  
> +    relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
>      /* HV or virtual hypervisor Real Mode Access */
> -    if ((msr_hv || cpu->vhyp) &&
> -        (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0)))) {
> +    if (!relocation && (msr_hv || (cpu->vhyp && (lpid == 0)))) {
>          /* In real mode top 4 effective addr bits (mostly) ignored */
>          raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
>  
> @@ -294,75 +545,26 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>          return 1;
>      }
>  
> -    /* Get Process Table */
> -    if (cpu->vhyp) {
> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> -        vhc->get_pate(cpu->vhyp, &pate);
> -    } else {
> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> -            return 1;
> -        }
> -        if (!validate_pate(cpu, lpid, &pate)) {
> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> -        }
> -        /* We don't support guest mode yet */
> -        if (lpid != 0) {
> -            error_report("PowerNV guest support Unimplemented");
> -            exit(1);
> -       }
> -    }
> -
> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> -    offset = pid * sizeof(struct prtb_entry);
> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> -    if (offset >= size) {
> -        /* offset exceeds size of the process table */
> -        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> -        return 1;
> -    }
> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> -
> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> - restart:
> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> -    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
> -        /* Couldn't get pte or access denied due to protection */
> -        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> +    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
> +    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
> +                          &psize, &prot, 1)) {
>          return 1;
>      }
>  
> -    /* Update Reference and Change Bits */
> -    if (ppc_radix64_hw_rc_updates(env)) {
> -        pte = ppc_radix64_set_rc(cpu, rwx, pte, pte_addr);
> -        if (!pte) {
> -            goto restart;
> -        }
> -    }
> -    /* If the page doesn't have C, treat it as read only */
> -    if (!(pte & R_PTE_C)) {
> -        prot &= ~PAGE_WRITE;
> -    }
>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
> -                 prot, mmu_idx, 1UL << page_size);
> +                 prot, mmu_idx, 1UL << psize);
>      return 0;
>  }
>  
>  hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>  {
> -    CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> -    PPCVirtualHypervisorClass *vhc;
> -    hwaddr raddr, pte_addr;
> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> -    int page_size, fault_cause = 0;
> -    ppc_v3_pate_t pate;
> +    uint64_t lpid = 0, pid = 0;
> +    int psize, prot;
> +    hwaddr raddr;
>  
>      /* Handle Real Mode */
> -    if (msr_dr == 0) {
> +    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && (lpid == 0)))) {
>          /* In real mode top 4 effective addr bits (mostly) ignored */
>          return eaddr & 0x0FFFFFFFFFFFFFFFULL;
>      }
> @@ -372,39 +574,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>          return -1;
>      }
>  
> -    /* Get Process Table */
> -    if (cpu->vhyp) {
> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> -        vhc->get_pate(cpu->vhyp, &pate);
> -    } else {
> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> -            return -1;
> -        }
> -        if (!validate_pate(cpu, lpid, &pate)) {
> -            return -1;
> -        }
> -        /* We don't support guest mode yet */
> -        if (lpid != 0) {
> -            error_report("PowerNV guest support Unimplemented");
> -            exit(1);
> -       }
> -    }
> -
> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> -    offset = pid * sizeof(struct prtb_entry);
> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> -    if (offset >= size) {
> -        /* offset exceeds size of the process table */
> -        return -1;
> -    }
> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> -
> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> -    if (!pte) {
> +    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
> +                          &prot, 0)) {
>          return -1;
>      }
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 08/13] target/ppc: Implement hcall H_SET_PARTITION_TABLE
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  2:30   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  2:30 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 3046 bytes --]

On Fri, May 03, 2019 at 03:53:11PM +1000, Suraj Jitindar Singh wrote:
> The hcall H_SET_PARTITION_TABLE is used by a guest acting as a nested
> hypervisor to register the partition table entry for one of its guests
> with the real hypervisor.
> 
> Implement this hcall for a spapr guest.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> ---
>  hw/ppc/spapr_hcall.c   | 22 ++++++++++++++++++++++
>  include/hw/ppc/spapr.h |  4 +++-
>  2 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 4d7fe337a1..704ceff8e1 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1828,6 +1828,25 @@ static target_ulong h_update_dt(PowerPCCPU *cpu, SpaprMachineState *spapr,
>      return H_SUCCESS;
>  }
>  
> +static target_ulong h_set_partition_table(PowerPCCPU *cpu,
> +                                          SpaprMachineState *spapr,
> +                                          target_ulong opcode,
> +                                          target_ulong *args)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    target_ulong ptcr = args[0];
> +
> +    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
> +        return H_FUNCTION;
> +    }
> +
> +    if ((ptcr & PTCR_PATS) > 24)
> +        return H_PARAMETER;
> +
> +    env->spr[SPR_PTCR] = ptcr;
> +    return H_SUCCESS;
> +}
> +
>  static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
>  static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
>  
> @@ -1934,6 +1953,9 @@ static void hypercall_register_types(void)
>  
>      spapr_register_hypercall(KVMPPC_H_UPDATE_DT, h_update_dt);
>  
> +    /* Platform-specific hcalls used for nested HV KVM */
> +    spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
> +
>      /* Virtual Processor Home Node */
>      spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
>                               h_home_node_associativity);
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 4251215908..e591ee0ba0 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -501,7 +501,9 @@ struct SpaprMachineState {
>  /* Client Architecture support */
>  #define KVMPPC_H_CAS            (KVMPPC_HCALL_BASE + 0x2)
>  #define KVMPPC_H_UPDATE_DT      (KVMPPC_HCALL_BASE + 0x3)
> -#define KVMPPC_HCALL_MAX        KVMPPC_H_UPDATE_DT
> +/* Platform-specific hcalls used for nested HV KVM */
> +#define H_SET_PARTITION_TABLE   0xF800

Urgh, vastly expanding the size of the kvmppc specific hcall table
here.  I guess that can't really be helped.
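
(Roughly quantifying that, assuming KVMPPC_HCALL_BASE is still 0xf000,
which isn't visible in this hunk: the table declaration quoted above goes
from 4 entries to 0x801 of them.)

typedef long (*hcall_fn_t)(void);      /* stand-in for spapr_hcall_fn */

#define KVMPPC_HCALL_BASE      0xf000  /* assumed value, not from this hunk */
#define KVMPPC_H_UPDATE_DT     (KVMPPC_HCALL_BASE + 0x3)
#define H_SET_PARTITION_TABLE  0xF800

/* old: KVMPPC_HCALL_MAX = KVMPPC_H_UPDATE_DT    ->     4 entries
 * new: KVMPPC_HCALL_MAX = H_SET_PARTITION_TABLE -> 0x801 (2049) entries,
 *      i.e. ~16KiB of mostly-NULL function pointers */
static hcall_fn_t kvmppc_table[H_SET_PARTITION_TABLE - KVMPPC_HCALL_BASE + 1];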

> +#define KVMPPC_HCALL_MAX        H_SET_PARTITION_TABLE
>  
>  typedef struct SpaprDeviceTreeUpdateHeader {
>      uint32_t version_id;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 09/13] target/ppc: Implement hcall H_ENTER_NESTED
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  2:57   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  2:57 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 19074 bytes --]

On Fri, May 03, 2019 at 03:53:12PM +1000, Suraj Jitindar Singh wrote:
> The hcall H_ENTER_NESTED is used by a guest acting as a nested
> hypervisor to provide the state of one of its guests which it would
> like the real hypervisor to load onto the cpu and execute on its behalf.
> 
> The hcall takes as arguments 2 guest real addresses giving the location
> of a regs struct and a hypervisor regs struct, which provide the values
> to use to execute the guest. These are loaded into the cpu state and
> then the function returns to continue tcg execution in the new context.
> When an interrupt requires us to context switch back, we restore the
> old register values and save the cpu state back into the guest memory.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> ---
>  hw/ppc/spapr_hcall.c     | 285 +++++++++++++++++++++++++++++++++++++++++++++++
>  include/hw/ppc/spapr.h   |   3 +-
>  target/ppc/cpu.h         |  55 +++++++++
>  target/ppc/excp_helper.c |  13 ++-
>  4 files changed, 353 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 704ceff8e1..68f3282214 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -16,6 +16,7 @@
>  #include "hw/ppc/spapr_ovec.h"
>  #include "mmu-book3s-v3.h"
>  #include "hw/mem/memory-device.h"
> +#include "hw/ppc/ppc.h"
>  
>  static bool has_spr(PowerPCCPU *cpu, int spr)
>  {
> @@ -1847,6 +1848,289 @@ static target_ulong h_set_partition_table(PowerPCCPU *cpu,
>      return H_SUCCESS;
>  }
>  
> +static void byteswap_pt_regs(struct pt_regs *regs)
> +{
> +    target_ulong *addr = (target_ulong *) regs;
> +
> +    for (; addr < ((target_ulong *) (regs + 1)); addr++) {
> +        *addr = bswap64(*addr);

Hrm.  pt_regs is defined in terms of target_ulongs, but this is
explicitly 64-bit.

> +    }
> +}
> +
> +static void byteswap_hv_regs(struct hv_guest_state *hr)

Bulk byteswapping structures like this always gives me the
heebie-jeebies.  It means whenever we have such a structure there's an
invisible bit of state: whether it is currently in the originally
supplied or the "fixed" endianness at this moment.  That's not obvious
to either the compiler or future people looking at the code.  You can't
even use tools like sparse to help you, because the same type is used
for the swapped and unswapped versions.

I think it would be preferable to treat the hv_guest_state structure
as always being the L1-supplied endianness version and do the swaps
value by value at the point you transcribe from this into / out of the
qemu internal structures (host endianness).

Of course, that has its own complications since then we need to pass
what the actual endianness of the guest structure is down to those
functions.

I don't suppose there's any chance we could retcon the paravirt nested
interfaces to define these structures as always being of a fixed
endianness (I guess it would have to be LE), rather than L1 mode
dependent?
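
To make the value-by-value alternative concrete, a minimal sketch (the
loader function and helper names here are made up; only needs_byteswap()
and the struct fields come from this patch): the L1-supplied copy is never
modified in place, each field is converted as it is transcribed, so there
is never a half-swapped structure lying around:

static uint64_t l1_to_host64(CPUPPCState *env, uint64_t val)
{
    /* swap only if the L1 byte order differs from the host byte order */
    return needs_byteswap(env) ? bswap64(val) : val;
}

static void load_l2_hv_regs(PowerPCCPU *cpu, const struct hv_guest_state *hr)
{
    CPUPPCState *env = &cpu->env;

    /* hr stays in L1-supplied endianness; convert per field on the way in */
    env->spr[SPR_HFSCR] = l1_to_host64(env, hr->hfscr);
    env->spr[SPR_SRR0]  = l1_to_host64(env, hr->srr0);
    env->spr[SPR_SRR1]  = l1_to_host64(env, hr->srr1);
    /* ... and so on for the remaining fields ... */
}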

> +{
> +    hr->version = bswap64(hr->version);
> +    hr->lpid = bswap32(hr->lpid);
> +    hr->vcpu_token = bswap32(hr->vcpu_token);
> +    hr->lpcr = bswap64(hr->lpcr);
> +    hr->pcr = bswap64(hr->pcr);
> +    hr->amor = bswap64(hr->amor);
> +    hr->dpdes = bswap64(hr->dpdes);
> +    hr->hfscr = bswap64(hr->hfscr);
> +    hr->tb_offset = bswap64(hr->tb_offset);
> +    hr->dawr0 = bswap64(hr->dawr0);
> +    hr->dawrx0 = bswap64(hr->dawrx0);
> +    hr->ciabr = bswap64(hr->ciabr);
> +    hr->hdec_expiry = bswap64(hr->hdec_expiry);
> +    hr->purr = bswap64(hr->purr);
> +    hr->spurr = bswap64(hr->spurr);
> +    hr->ic = bswap64(hr->ic);
> +    hr->vtb = bswap64(hr->vtb);
> +    hr->hdar = bswap64(hr->hdar);
> +    hr->hdsisr = bswap64(hr->hdsisr);
> +    hr->heir = bswap64(hr->heir);
> +    hr->asdr = bswap64(hr->asdr);
> +    hr->srr0 = bswap64(hr->srr0);
> +    hr->srr1 = bswap64(hr->srr1);
> +    hr->sprg[0] = bswap64(hr->sprg[0]);
> +    hr->sprg[1] = bswap64(hr->sprg[1]);
> +    hr->sprg[2] = bswap64(hr->sprg[2]);
> +    hr->sprg[3] = bswap64(hr->sprg[3]);
> +    hr->pidr = bswap64(hr->pidr);
> +    hr->cfar = bswap64(hr->cfar);
> +    hr->ppr = bswap64(hr->ppr);
> +}
> +
> +static void save_regs(PowerPCCPU *cpu, struct pt_regs *regs)
> +{
> +    CPUPPCState env = cpu->env;
> +    int i;
> +
> +    for (i = 0; i < 32; i++)
> +        regs->gpr[i] = env.gpr[i];
> +    regs->nip = env.nip;
> +    regs->msr = env.msr;
> +    regs->ctr = env.ctr;
> +    regs->link = env.lr;
> +    regs->xer = env.xer;
> +    regs->ccr = 0UL;
> +    for (i = 0; i < 8; i++)
> +        regs->ccr |= ((env.crf[i] & 0xF) << ((7 - i) * 4));
> +    regs->dar = env.spr[SPR_DAR];
> +    regs->dsisr = env.spr[SPR_DSISR];
> +}
> +
> +static void save_hv_regs(PowerPCCPU *cpu, struct hv_guest_state *hv_regs)
> +{
> +    CPUPPCState env = cpu->env;
> +
> +    hv_regs->lpid = env.spr[SPR_LPIDR];
> +    hv_regs->lpcr = env.spr[SPR_LPCR];
> +    hv_regs->pcr = env.spr[SPR_PCR];
> +    hv_regs->amor = env.spr[SPR_AMOR];
> +    hv_regs->dpdes = !!(env.pending_interrupts & (1 << PPC_INTERRUPT_DOORBELL));
> +    hv_regs->hfscr = env.spr[SPR_HFSCR];
> +    hv_regs->tb_offset = env.tb_env->tb_offset;
> +    hv_regs->dawr0 = env.spr[SPR_DAWR];
> +    hv_regs->dawrx0 = env.spr[SPR_DAWRX];
> +    hv_regs->ciabr = env.spr[SPR_CIABR];
> +    hv_regs->purr = cpu_ppc_load_purr(&env);
> +    hv_regs->spurr = cpu_ppc_load_purr(&env);
> +    hv_regs->ic = env.spr[SPR_IC];
> +    hv_regs->vtb = cpu_ppc_load_vtb(&env);
> +    hv_regs->hdar = env.spr[SPR_HDAR];
> +    hv_regs->hdsisr = env.spr[SPR_HDSISR];
> +    hv_regs->asdr = env.spr[SPR_ASDR];
> +    hv_regs->srr0 = env.spr[SPR_SRR0];
> +    hv_regs->srr1 = env.spr[SPR_SRR1];
> +    hv_regs->sprg[0] = env.spr[SPR_SPRG0];
> +    hv_regs->sprg[1] = env.spr[SPR_SPRG1];
> +    hv_regs->sprg[2] = env.spr[SPR_SPRG2];
> +    hv_regs->sprg[3] = env.spr[SPR_SPRG3];
> +    hv_regs->pidr = env.spr[SPR_BOOKS_PID];
> +    hv_regs->cfar = env.cfar;
> +    hv_regs->ppr = env.spr[SPR_PPR];
> +}
> +
> +static void restore_regs(PowerPCCPU *cpu, struct pt_regs regs)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    int i;
> +
> +    for (i = 0; i < 32; i++)
> +        env->gpr[i] = regs.gpr[i];
> +    env->nip = regs.nip;
> +    ppc_store_msr(env, regs.msr);
> +    env->ctr = regs.ctr;
> +    env->lr = regs.link;
> +    env->xer = regs.xer;
> +    for (i = 0; i < 8; i++)
> +        env->crf[i] = (regs.ccr >> ((7 - i) * 4)) & 0xF;
> +    env->spr[SPR_DAR] = regs.dar;
> +    env->spr[SPR_DSISR] = regs.dsisr;
> +}
> +
> +static void restore_hv_regs(PowerPCCPU *cpu, struct hv_guest_state hv_regs)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    target_ulong lpcr_mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD
> +                                       | LPCR_LPES0 | LPCR_LPES1 | LPCR_MER;
> +
> +    env->spr[SPR_LPIDR] = hv_regs.lpid;
> +    ppc_store_lpcr(cpu, (hv_regs.lpcr & lpcr_mask) |
> +                        (env->spr[SPR_LPCR] & ~lpcr_mask));
> +    env->spr[SPR_PCR] = hv_regs.pcr;
> +    env->spr[SPR_AMOR] = hv_regs.amor;
> +    if (hv_regs.dpdes) {
> +        env->pending_interrupts |= 1 << PPC_INTERRUPT_DOORBELL;
> +        cpu_interrupt(CPU(cpu), CPU_INTERRUPT_HARD);
> +    } else {
> +        env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DOORBELL);
> +    }
> +    env->spr[SPR_HFSCR] = hv_regs.hfscr;
> +    env->spr[SPR_DAWR] = hv_regs.dawr0;
> +    env->spr[SPR_DAWRX] = hv_regs.dawrx0;
> +    env->spr[SPR_CIABR] = hv_regs.ciabr;
> +    cpu_ppc_store_purr(env, hv_regs.purr);      /* for TCG PURR == SPURR */
> +    env->spr[SPR_IC] = hv_regs.ic;
> +    cpu_ppc_store_vtb(env, hv_regs.vtb);
> +    env->spr[SPR_HDAR] = hv_regs.hdar;
> +    env->spr[SPR_HDSISR] = hv_regs.hdsisr;
> +    env->spr[SPR_ASDR] = hv_regs.asdr;
> +    env->spr[SPR_SRR0] = hv_regs.srr0;
> +    env->spr[SPR_SRR1] = hv_regs.srr1;
> +    env->spr[SPR_SPRG0] = hv_regs.sprg[0];
> +    env->spr[SPR_SPRG1] = hv_regs.sprg[1];
> +    env->spr[SPR_SPRG2] = hv_regs.sprg[2];
> +    env->spr[SPR_SPRG3] = hv_regs.sprg[3];
> +    env->spr[SPR_BOOKS_PID] = hv_regs.pidr;
> +    env->cfar = hv_regs.cfar;
> +    env->spr[SPR_PPR] = hv_regs.ppr;
> +    tlb_flush(CPU(cpu));
> +}
> +
> +static void sanitise_hv_regs(PowerPCCPU *cpu, struct hv_guest_state *hv_regs)
> +{
> +    CPUPPCState env = cpu->env;
> +
> +    /* Apply more restrictive set of facilities */
> +    hv_regs->hfscr &= ((0xFFUL << 56) | env.spr[SPR_HFSCR]);
> +
> +    /* Don't match on hypervisor address */
> +    hv_regs->dawrx0 &= ~(1UL << 2);
> +
> +    /* Don't match on hypervisor address */
> +    if ((hv_regs->ciabr & 0x3) == 0x3)
> +        hv_regs->ciabr &= ~0x3UL;
> +}
> +
> +static inline bool needs_byteswap(const CPUPPCState *env)
> +{
> +#if defined(HOST_WORDS_BIGENDIAN)
> +    return msr_le;
> +#else
> +    return !msr_le;
> +#endif
> +}
> +
> +static target_ulong h_enter_nested(PowerPCCPU *cpu, SpaprMachineState *spapr,
> +                                   target_ulong opcode, target_ulong *args)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    env->hv_ptr = args[0];
> +    env->regs_ptr = args[1];
> +    uint64_t hdec;
> +
> +    assert(env->spr[SPR_LPIDR] == 0);
> +
> +    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
> +        return H_FUNCTION;
> +    }
> +
> +    if (!env->has_hv_mode || !ppc_check_compat(cpu, CPU_POWERPC_LOGICAL_3_00, 0,
> +                                               spapr->max_compat_pvr)
> +                          || !ppc64_v3_radix(cpu)) {
> +        error_report("pseries guest support only implemented for POWER9 radix\n");
> +        return H_HARDWARE;
> +    }
> +
> +    if (!env->spr[SPR_PTCR])
> +        return H_NOT_AVAILABLE;
> +
> +    memset(&env->l1_saved_hv, 0, sizeof(env->l1_saved_hv));
> +    memset(&env->l1_saved_regs, 0, sizeof(env->l1_saved_regs));
> +
> +    /* load l2 state from l1 memory */
> +    cpu_physical_memory_read(env->hv_ptr, &env->l2_hv, sizeof(env->l2_hv));
> +    if (needs_byteswap(env)) {
> +        byteswap_hv_regs(&env->l2_hv);
> +    }
> +    if (env->l2_hv.version != 1)
> +        return H_P2;
> +    if (env->l2_hv.lpid == 0)
> +        return H_P2;
> +    if (!(env->l2_hv.lpcr & LPCR_HR)) {
> +        error_report("pseries guest support only implemented for POWER9 radix guests\n");
> +        return H_P2;
> +    }
> +
> +    cpu_physical_memory_read(env->regs_ptr, &env->l2_regs, sizeof(env->l2_regs));
> +    if (needs_byteswap(env)) {
> +        byteswap_pt_regs(&env->l2_regs);
> +    }
> +
> +    /* save l1 values of things */
> +    save_regs(cpu, &env->l1_saved_regs);
> +    save_hv_regs(cpu, &env->l1_saved_hv);
> +
> +    /* adjust for timebase */
> +    hdec = env->l2_hv.hdec_expiry - cpu_ppc_load_tbl(env);
> +    env->tb_env->tb_offset += env->l2_hv.tb_offset;
> +    /* load l2 values of things */
> +    sanitise_hv_regs(cpu, &env->l2_hv);
> +    restore_regs(cpu, env->l2_regs);
> +    env->msr &= ~MSR_HVB;
> +    restore_hv_regs(cpu, env->l2_hv);
> +    cpu_ppc_store_hdecr(env, hdec);
> +
> +    assert(env->spr[SPR_LPIDR] != 0);
> +
> +    return env->gpr[3];
> +}
> +
> +void h_exit_nested(PowerPCCPU *cpu)

I'd prefer to call this something different, since it's not actually
invoked as an hcall.

> +{
> +    CPUPPCState *env = &cpu->env;
> +    uint64_t delta_purr, delta_ic, delta_vtb;
> +    target_ulong trap = env->nip;
> +
> +    assert(env->spr[SPR_LPIDR] != 0);
> +
> +    /* save l2 values of things */
> +    if (trap == 0x100 || trap == 0x200 || trap == 0xc00) {
> +        env->nip = env->spr[SPR_SRR0];
> +        env->msr = env->spr[SPR_SRR1];
> +    } else {
> +        env->nip = env->spr[SPR_HSRR0];
> +        env->msr = env->spr[SPR_HSRR1];
> +    }
> +    save_regs(cpu, &env->l2_regs);
> +    delta_purr = cpu_ppc_load_purr(env) - env->l2_hv.purr;
> +    delta_ic = env->spr[SPR_IC] - env->l2_hv.ic;
> +    delta_vtb = cpu_ppc_load_vtb(env) - env->l2_hv.vtb;
> +    save_hv_regs(cpu, &env->l2_hv);
> +
> +    /* restore l1 state */
> +    restore_regs(cpu, env->l1_saved_regs);
> +    env->tb_env->tb_offset = env->l1_saved_hv.tb_offset;
> +    env->l1_saved_hv.purr += delta_purr;
> +    env->l1_saved_hv.ic += delta_ic;
> +    env->l1_saved_hv.vtb += delta_vtb;
> +    restore_hv_regs(cpu, env->l1_saved_hv);
> +
> +    /* save l2 state back to l1 memory */
> +    if (needs_byteswap(env)) {
> +        byteswap_hv_regs(&env->l2_hv);
> +        byteswap_pt_regs(&env->l2_regs);
> +    }
> +    cpu_physical_memory_write(env->hv_ptr, &env->l2_hv, sizeof(env->l2_hv));
> +    cpu_physical_memory_write(env->regs_ptr, &env->l2_regs, sizeof(env->l2_regs));
> +
> +    assert(env->spr[SPR_LPIDR] == 0);
> +
> +    env->gpr[3] = trap;
> +}
> +
>  static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
>  static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
>  
> @@ -1955,6 +2239,7 @@ static void hypercall_register_types(void)
>  
>      /* Platform-specific hcalls used for nested HV KVM */
>      spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
> +    spapr_register_hypercall(H_ENTER_NESTED, h_enter_nested);
>  
>      /* Virtual Processor Home Node */
>      spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index e591ee0ba0..7083dea9ef 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -503,7 +503,8 @@ struct SpaprMachineState {
>  #define KVMPPC_H_UPDATE_DT      (KVMPPC_HCALL_BASE + 0x3)
>  /* Platform-specific hcalls used for nested HV KVM */
>  #define H_SET_PARTITION_TABLE   0xF800
> -#define KVMPPC_HCALL_MAX        H_SET_PARTITION_TABLE
> +#define H_ENTER_NESTED          0xF804
> +#define KVMPPC_HCALL_MAX        H_ENTER_NESTED
>  
>  typedef struct SpaprDeviceTreeUpdateHeader {
>      uint32_t version_id;
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 3acc248f40..426015c9cd 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -982,6 +982,54 @@ struct ppc_radix_page_info {
>  #define PPC_CPU_OPCODES_LEN          0x40
>  #define PPC_CPU_INDIRECT_OPCODES_LEN 0x20
>  
> +struct pt_regs {
> +    target_ulong gpr[32];
> +    target_ulong nip;
> +    target_ulong msr;
> +    target_ulong orig_gpr3;
> +    target_ulong ctr;
> +    target_ulong link;
> +    target_ulong xer;
> +    target_ulong ccr;
> +    target_ulong softe;
> +    target_ulong trap;
> +    target_ulong dar;
> +    target_ulong dsisr;
> +    target_ulong result;
> +};
> +
> +struct hv_guest_state {
> +    uint64_t version;            /* version of this structure layout */
> +    uint32_t lpid;
> +    uint32_t vcpu_token;
> +    /* These registers are hypervisor privileged (at least for writing) */
> +    uint64_t lpcr;
> +    uint64_t pcr;
> +    uint64_t amor;
> +    uint64_t dpdes;
> +    uint64_t hfscr;
> +    int64_t  tb_offset;
> +    uint64_t dawr0;
> +    uint64_t dawrx0;
> +    uint64_t ciabr;
> +    uint64_t hdec_expiry;
> +    uint64_t purr;
> +    uint64_t spurr;
> +    uint64_t ic;
> +    uint64_t vtb;
> +    uint64_t hdar;
> +    uint64_t hdsisr;
> +    uint64_t heir;
> +    uint64_t asdr;
> +    /* These are OS privileged but need to be set late in guest entry */
> +    uint64_t srr0;
> +    uint64_t srr1;
> +    uint64_t sprg[4];
> +    uint64_t pidr;
> +    uint64_t cfar;
> +    uint64_t ppr;
> +};

Could you get either or both of these structure definitions from the
imported kernel headers, rather than recreating them?

> +
>  struct CPUPPCState {
>      /* First are the most commonly used resources
>       * during translated code execution
> @@ -1184,6 +1232,11 @@ struct CPUPPCState {
>      uint32_t tm_vscr;
>      uint64_t tm_dscr;
>      uint64_t tm_tar;
> +
> +    /* used to store register state when running a nested kvm guest */
> +    target_ulong hv_ptr, regs_ptr;
> +    struct hv_guest_state l2_hv, l1_saved_hv;
> +    struct pt_regs l2_regs, l1_saved_regs;

I don't love adding this large chunk of data to the general cpu state
structure that's only useful on one machine type in limited circumstances.
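
One possible shape (purely a sketch, the type name and the "nested" field
are invented, nothing here exists in the tree): keep a single pointer in
the spapr-specific per-CPU state and allocate it lazily, so machines that
never run a nested guest pay nothing:

typedef struct SpaprNestedCpuState {
    target_ulong hv_ptr, regs_ptr;
    struct hv_guest_state l2_hv, l1_saved_hv;
    struct pt_regs l2_regs, l1_saved_regs;
} SpaprNestedCpuState;

/* hypothetical field in the spapr per-CPU state, not in CPUPPCState:
 *     SpaprNestedCpuState *nested;    NULL until first use
 */

static SpaprNestedCpuState *get_nested_state(SpaprCpuState *spapr_cpu)
{
    /* allocated on the first H_ENTER_NESTED, freed when nesting ends */
    if (!spapr_cpu->nested) {
        spapr_cpu->nested = g_new0(SpaprNestedCpuState, 1);
    }
    return spapr_cpu->nested;
}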

>  };
>  
>  #define SET_FIT_PERIOD(a_, b_, c_, d_)          \
> @@ -2647,4 +2700,6 @@ static inline ppc_avr_t *cpu_avr_ptr(CPUPPCState *env, int i)
>  void dump_mmu(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env);
>  
>  void ppc_maybe_bswap_register(CPUPPCState *env, uint8_t *mem_buf, int len);
> +
> +void h_exit_nested(PowerPCCPU *cpu);
>  #endif /* PPC_CPU_H */
> diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
> index 10091d4624..9470c02512 100644
> --- a/target/ppc/excp_helper.c
> +++ b/target/ppc/excp_helper.c
> @@ -347,7 +347,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
>          env->nip += 4;
>  
>          /* "PAPR mode" built-in hypercall emulation */
> -        if ((lev == 1) && cpu->vhyp) {
> +        if ((lev == 1) && (cpu->vhyp && (env->spr[SPR_LPIDR] == 0))) {

This change doesn't quite make sense to me.  If cpu->vhyp is set, true
HV mode essentially doesn't exist on the vcpu, so it doesn't make
sense to process an hc instruction any other way than talking to the vhyp.


>              PPCVirtualHypervisorClass *vhc =
>                  PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>              vhc->hypercall(cpu->vhyp, cpu);
> @@ -664,7 +664,7 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
>      env->spr[srr1] = msr;
>  
>      /* Sanity check */
> -    if (!(env->msr_mask & MSR_HVB)) {
> +    if (!(env->msr_mask & MSR_HVB) && (env->spr[SPR_LPIDR] == 0)) {
>          if (new_msr & MSR_HVB) {
>              cpu_abort(cs, "Trying to deliver HV exception (MSR) %d with "
>                        "no HV support\n", excp);
> @@ -770,6 +770,15 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
>      /* Reset the reservation */
>      env->reserve_addr = -1;
>  
> +    if ((!(env->msr_mask & MSR_HVB) && (new_msr & MSR_HVB))) {
> +        /*
> +         * We were in a guest, but this interrupt is setting the MSR[HV] bit
> +         * meaning we want to handle this at l1. Call h_exit_nested to context
> +         * switch back.
> +         */
> +        h_exit_nested(cpu);
> +    }
> +
>      /* Any interrupt is context synchronizing, check if TCG TLB
>       * needs a delayed flush on ppc64
>       */

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 10/13] target/ppc: Implement hcall H_TLB_INVALIDATE
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  6:28   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  6:28 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 3361 bytes --]

On Fri, May 03, 2019 at 03:53:13PM +1000, Suraj Jitindar Singh wrote:
> The hcall H_TLB_INVALIDATE is used by a guest acting as a nested
> hypervisor to perform partition scoped tlb invalidation since these
> instructions are hypervisor privileged.
> 
> Check the arguments are valid and then invalidate the entire tlb since
> this is about all we can do in tcg.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/ppc/spapr_hcall.c   | 28 ++++++++++++++++++++++++++++
>  include/hw/ppc/spapr.h |  3 ++-
>  2 files changed, 30 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 68f3282214..a84d5e2163 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -2131,6 +2131,33 @@ void h_exit_nested(PowerPCCPU *cpu)
>      env->gpr[3] = trap;
>  }
>  
> +static target_ulong h_nested_tlb_invalidate(PowerPCCPU *cpu,
> +                                            SpaprMachineState *spapr,
> +                                            target_ulong opcode,
> +                                            target_ulong *args)
> +{
> +    target_ulong instr = args[0];
> +    target_ulong rbval = args[2];
> +    int r, ric, prs, is;
> +
> +    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
> +        return H_FUNCTION;
> +    }
> +
> +    ric = (instr >> 18) & 0x3;
> +    prs = (instr >> 17) & 0x1;
> +    r = (instr >> 16) & 0x1;
> +    is = (rbval >> 10) & 0x3;
> +
> +    if ((!r) || (prs) || (ric == 3) || (is == 1) || ((!is) && (ric == 1 ||
> +                                                               ric == 2)))
> +        return H_PARAMETER;
> +
> +    /* Invalidate everything, not much else we can do */
> +    cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
> +    return H_SUCCESS;
> +}
> +
>  static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
>  static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
>  
> @@ -2240,6 +2267,7 @@ static void hypercall_register_types(void)
>      /* Platform-specific hcalls used for nested HV KVM */
>      spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
>      spapr_register_hypercall(H_ENTER_NESTED, h_enter_nested);
> +    spapr_register_hypercall(H_TLB_INVALIDATE, h_nested_tlb_invalidate);
>  
>      /* Virtual Processor Home Node */
>      spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 7083dea9ef..6a614c445f 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -504,7 +504,8 @@ struct SpaprMachineState {
>  /* Platform-specific hcalls used for nested HV KVM */
>  #define H_SET_PARTITION_TABLE   0xF800
>  #define H_ENTER_NESTED          0xF804
> -#define KVMPPC_HCALL_MAX        H_ENTER_NESTED
> +#define H_TLB_INVALIDATE        0xF808
> +#define KVMPPC_HCALL_MAX        H_TLB_INVALIDATE
>  
>  typedef struct SpaprDeviceTreeUpdateHeader {
>      uint32_t version_id;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 11/13] target/ppc: Implement hcall H_COPY_TOFROM_GUEST
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  6:32   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  6:32 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 6538 bytes --]

On Fri, May 03, 2019 at 03:53:14PM +1000, Suraj Jitindar Singh wrote:
> The hcall H_COPY_TOFROM_GUEST is used by a guest acting as a nested
> hypervisor to access quadrants, since quadrant access is hypervisor
> privileged.
> 
> Translate the guest address to be accessed, map the memory and perform
> the access on behalf of the guest. If the parameters are invalid, the
> address can't be translated, or the memory cannot be mapped, then fail
> the access.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> ---
>  hw/ppc/spapr_hcall.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
>  include/hw/ppc/spapr.h   |  3 +-
>  target/ppc/mmu-radix64.c |  7 ++---
>  target/ppc/mmu-radix64.h |  4 +++
>  4 files changed, 83 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index a84d5e2163..a370d70500 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -17,6 +17,7 @@
>  #include "mmu-book3s-v3.h"
>  #include "hw/mem/memory-device.h"
>  #include "hw/ppc/ppc.h"
> +#include "mmu-radix64.h"
>  
>  static bool has_spr(PowerPCCPU *cpu, int spr)
>  {
> @@ -2158,6 +2159,78 @@ static target_ulong h_nested_tlb_invalidate(PowerPCCPU *cpu,
>      return H_SUCCESS;
>  }
>  
> +static target_ulong h_copy_tofrom_guest(PowerPCCPU *cpu,
> +                                        SpaprMachineState *spapr,
> +                                        target_ulong opcode, target_ulong *args)
> +{
> +    target_ulong lpid = args[0];
> +    target_ulong pid = args[1];
> +    vaddr eaddr = args[2];
> +    target_ulong gp_to = args[3];
> +    target_ulong gp_from = args[4];
> +    target_ulong n = args[5];
> +    int is_load = !!gp_to;

Looks like this should be a bool.

> +    void *from, *to;
> +    int prot, psize;
> +    hwaddr raddr, to_len, from_len;
> +
> +    if (spapr_get_cap(spapr, SPAPR_CAP_NESTED_KVM_HV) == 0) {
> +        return H_FUNCTION;
> +    }
> +
> +    if ((gp_to && gp_from) || (!gp_to && !gp_from)) {
> +        return H_PARAMETER;
> +    }
> +
> +    if (eaddr & (0xFFFUL << 52)) {
> +        return H_PARAMETER;
> +    }
> +
> +    if (!lpid) {
> +        return H_PARAMETER;
> +    }
> +
> +    /* Translate eaddr to raddr */
> +    if (ppc_radix64_xlate(cpu, eaddr, is_load, lpid, pid, 1, &raddr, &psize,

Don't we need some validation that the guest is in radix mode?
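
A minimal sketch of such a check, done before attempting the translation
(this assumes the PATE0_HR bit definition from mmu-book3s-v3.h and reuses
the ppc64_v3_get_pate() helper already used elsewhere in this series):

    ppc_v3_pate_t pate;

    if (!ppc64_v3_get_pate(cpu, lpid, &pate) || !(pate.dw0 & PATE0_HR)) {
        /* no partition table entry, or the L2 partition isn't radix */
        return H_PARAMETER;
    }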

> +                          &prot, 0)) {
> +        return H_NOT_FOUND;
> +    }
> +    if (((raddr & ((1UL << psize) - 1)) + n) >= (1UL << psize)) {
> +        return H_PARAMETER;
> +    }
> +
> +    if (is_load) {
> +        gp_from = raddr;
> +    } else {
> +        gp_to = raddr;
> +    }
> +
> +    /* Map the memory regions and perform a memory copy */
> +    from = cpu_physical_memory_map(gp_from, &from_len, 0);
> +    if (!from) {
> +        return H_NOT_FOUND;
> +    }
> +    if (from_len < n) {
> +        cpu_physical_memory_unmap(from, from_len, 0, 0);
> +        return H_PARAMETER;
> +    }
> +    to = cpu_physical_memory_map(gp_to, &to_len, 1);
> +    if (!to) {
> +        cpu_physical_memory_unmap(from, from_len, 0, 0);
> +        return H_PARAMETER;
> +    }
> +    if (to_len < n) {
> +        cpu_physical_memory_unmap(from, from_len, 0, 0);
> +        cpu_physical_memory_unmap(to, to_len, 1, 0);
> +        return H_PARAMETER;
> +    }
> +    memcpy(to, from, n);
> +    cpu_physical_memory_unmap(from, from_len, 0, n);
> +    cpu_physical_memory_unmap(to, to_len, 1, n);
> +
> +    return H_SUCCESS;
> +}
> +
>  static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
>  static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
>  
> @@ -2268,6 +2341,7 @@ static void hypercall_register_types(void)
>      spapr_register_hypercall(H_SET_PARTITION_TABLE, h_set_partition_table);
>      spapr_register_hypercall(H_ENTER_NESTED, h_enter_nested);
>      spapr_register_hypercall(H_TLB_INVALIDATE, h_nested_tlb_invalidate);
> +    spapr_register_hypercall(H_COPY_TOFROM_GUEST, h_copy_tofrom_guest);
>  
>      /* Virtual Processor Home Node */
>      spapr_register_hypercall(H_HOME_NODE_ASSOCIATIVITY,
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 6a614c445f..d62f4108d4 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -505,7 +505,8 @@ struct SpaprMachineState {
>  #define H_SET_PARTITION_TABLE   0xF800
>  #define H_ENTER_NESTED          0xF804
>  #define H_TLB_INVALIDATE        0xF808
> -#define KVMPPC_HCALL_MAX        H_TLB_INVALIDATE
> +#define H_COPY_TOFROM_GUEST     0xF80C
> +#define KVMPPC_HCALL_MAX        H_COPY_TOFROM_GUEST
>  
>  typedef struct SpaprDeviceTreeUpdateHeader {
>      uint32_t version_id;
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 6118ad1b00..2a8147fc38 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -429,10 +429,9 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>      return true;
>  }
>  
> -static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> -                             uint64_t lpid, uint64_t pid, bool relocation,
> -                             hwaddr *raddr, int *psizep, int *protp,
> -                             bool cause_excp)
> +int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx, uint64_t lpid,
> +                      uint64_t pid, bool relocation, hwaddr *raddr, int *psizep,
> +                      int *protp, bool cause_excp)
>  {
>      CPUPPCState *env = &cpu->env;
>      ppc_v3_pate_t pate;
> diff --git a/target/ppc/mmu-radix64.h b/target/ppc/mmu-radix64.h
> index 96228546aa..c0bbd5c332 100644
> --- a/target/ppc/mmu-radix64.h
> +++ b/target/ppc/mmu-radix64.h
> @@ -66,6 +66,10 @@ static inline int ppc_radix64_get_prot_amr(PowerPCCPU *cpu)
>             (iamr & 0x1 ? 0 : PAGE_EXEC);
>  }
>  
> +int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx, uint64_t lpid,
> +                      uint64_t pid, bool relocation, hwaddr *raddr, int *psizep,
> +                      int *protp, bool cause_excp);
> +
>  #endif /* TARGET_PPC64 */
>  
>  #endif /* CONFIG_USER_ONLY */

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 12/13] target/ppc: Introduce POWER9 DD2.2 cpu type
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  6:32   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  6:32 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 2511 bytes --]

On Fri, May 03, 2019 at 03:53:15PM +1000, Suraj Jitindar Singh wrote:
> Introduce a POWER9 DD2.2 cpu type with pvr 0x004E1202.
> 
> A DD2.2 POWER9 cpu type is needed to enable kvm for pseries tcg guests
> since it means they will use the H_ENTER_NESTED hcall to run a guest
> rather than trying the generic entry path which will fail.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/ppc/spapr_cpu_core.c | 1 +
>  target/ppc/cpu-models.c | 2 ++
>  target/ppc/cpu-models.h | 1 +
>  3 files changed, 4 insertions(+)
> 
> diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
> index 40e7010cf0..98d46c6edb 100644
> --- a/hw/ppc/spapr_cpu_core.c
> +++ b/hw/ppc/spapr_cpu_core.c
> @@ -399,6 +399,7 @@ static const TypeInfo spapr_cpu_core_type_infos[] = {
>      DEFINE_SPAPR_CPU_CORE_TYPE("power8nvl_v1.0"),
>      DEFINE_SPAPR_CPU_CORE_TYPE("power9_v1.0"),
>      DEFINE_SPAPR_CPU_CORE_TYPE("power9_v2.0"),
> +    DEFINE_SPAPR_CPU_CORE_TYPE("power9_v2.2"),
>  #ifdef CONFIG_KVM
>      DEFINE_SPAPR_CPU_CORE_TYPE("host"),
>  #endif
> diff --git a/target/ppc/cpu-models.c b/target/ppc/cpu-models.c
> index 7c75963e3c..603ae7f5b4 100644
> --- a/target/ppc/cpu-models.c
> +++ b/target/ppc/cpu-models.c
> @@ -773,6 +773,8 @@
>                  "POWER9 v1.0")
>      POWERPC_DEF("power9_v2.0",   CPU_POWERPC_POWER9_DD20,            POWER9,
>                  "POWER9 v2.0")
> +    POWERPC_DEF("power9_v2.2",   CPU_POWERPC_POWER9_DD22,            POWER9,
> +                "POWER9 v2.2")
>  #endif /* defined (TARGET_PPC64) */
>  
>  /***************************************************************************/
> diff --git a/target/ppc/cpu-models.h b/target/ppc/cpu-models.h
> index efdb2fa53c..820e94b0c8 100644
> --- a/target/ppc/cpu-models.h
> +++ b/target/ppc/cpu-models.h
> @@ -373,6 +373,7 @@ enum {
>      CPU_POWERPC_POWER9_BASE        = 0x004E0000,
>      CPU_POWERPC_POWER9_DD1         = 0x004E0100,
>      CPU_POWERPC_POWER9_DD20        = 0x004E1200,
> +    CPU_POWERPC_POWER9_DD22        = 0x004E1202,
>      CPU_POWERPC_970_v22            = 0x00390202,
>      CPU_POWERPC_970FX_v10          = 0x00391100,
>      CPU_POWERPC_970FX_v20          = 0x003C0200,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Qemu-devel] [QEMU-PPC] [PATCH 13/13] target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg
  2019-05-03  5:53   ` Suraj Jitindar Singh
  (?)
@ 2019-05-10  6:34   ` David Gibson
  -1 siblings, 0 replies; 47+ messages in thread
From: David Gibson @ 2019-05-10  6:34 UTC (permalink / raw)
  To: Suraj Jitindar Singh; +Cc: groug, qemu-ppc, qemu-devel, clg

[-- Attachment #1: Type: text/plain, Size: 3732 bytes --]

On Fri, May 03, 2019 at 03:53:16PM +1000, Suraj Jitindar Singh wrote:
> It is now possible to use nested kvm-hv under tcg, so allow it to
> be enabled.
> 
> Note that nested kvm-hv requires that rc updates to ptes be done by
> software, otherwise the page tables get out of sync. So disable hardware
> rc updates when nested kvm-hv is enabled.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> ---
>  hw/ppc/spapr_caps.c      | 22 ++++++++++++++++++----
>  target/ppc/cpu.h         |  1 +
>  target/ppc/mmu-radix64.c |  4 ++--
>  3 files changed, 21 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/ppc/spapr_caps.c b/hw/ppc/spapr_caps.c
> index 3278c09b0f..7fe07d83dd 100644
> --- a/hw/ppc/spapr_caps.c
> +++ b/hw/ppc/spapr_caps.c
> @@ -389,10 +389,7 @@ static void cap_nested_kvm_hv_apply(SpaprMachineState *spapr,
>          return;
>      }
>  
> -    if (tcg_enabled()) {
> -        error_setg(errp,
> -                   "No Nested KVM-HV support in tcg, try cap-nested-hv=off");
> -    } else if (kvm_enabled()) {
> +    if (kvm_enabled()) {
>          if (!kvmppc_has_cap_nested_kvm_hv()) {
>              error_setg(errp,
>  "KVM implementation does not support Nested KVM-HV, try cap-nested-hv=off");
> @@ -400,6 +397,22 @@ static void cap_nested_kvm_hv_apply(SpaprMachineState *spapr,
>                  error_setg(errp,
>  "Error enabling cap-nested-hv with KVM, try cap-nested-hv=off");
>          }
> +    } /* else { nothing required for tcg } */
> +}
> +
> +static void cap_nested_kvm_hv_cpu_apply(SpaprMachineState *spapr,
> +                                        PowerPCCPU *cpu,
> +                                        uint8_t val, Error **errp)
> +{
> +    CPUPPCState *env = &cpu->env;
> +
> +    if (tcg_enabled() && val) {
> +        if (env->spr[SPR_PVR] != 0x004E1202) {

Hrm.  Something other than an explicit PVR check would be nice (or
we'll have to keep hacking this when DD2.3 arrives).
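
A sketch of one less brittle alternative, gating on "POWER9, DD2.2 or
later" so a future DD2.3 is accepted without touching this again (this
assumes the DD level sits in the low bits of the CPU_POWERPC_POWER9_*
values from cpu-models.h, as the constants added in patch 12 suggest):

    uint32_t pvr = env->spr[SPR_PVR];

    if ((pvr & 0xFFFF0000) != CPU_POWERPC_POWER9_BASE ||
        (pvr & 0x0FFF) < (CPU_POWERPC_POWER9_DD22 & 0x0FFF)) {
        error_setg(errp, "Nested KVM-HV needs POWER9 DD2.2 or later, "
                         "try cap-nested-hv=off");
        return;
    }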

> +            error_setg(errp, "Nested KVM-HV only supported on POWER9 DD2.2, "
> +                             "try cap-nested-hv=off or -cpu power9_v2.2");
> +            return;
> +        }
> +        env->disable_hw_rc_updates = true;
>      }
>  }
>  
> @@ -544,6 +557,7 @@ SpaprCapabilityInfo capability_table[SPAPR_CAP_NUM] = {
>          .set = spapr_cap_set_bool,
>          .type = "bool",
>          .apply = cap_nested_kvm_hv_apply,
> +        .cpu_apply = cap_nested_kvm_hv_cpu_apply,
>      },
>      [SPAPR_CAP_LARGE_DECREMENTER] = {
>          .name = "large-decr",
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 426015c9cd..6502e0de82 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1237,6 +1237,7 @@ struct CPUPPCState {
>      target_ulong hv_ptr, regs_ptr;
>      struct hv_guest_state l2_hv, l1_saved_hv;
>      struct pt_regs l2_regs, l1_saved_regs;
> +    bool disable_hw_rc_updates;
>  };
>  
>  #define SET_FIT_PERIOD(a_, b_, c_, d_)          \
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 2a8147fc38..cc06967dbe 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -31,9 +31,9 @@
>  static inline bool ppc_radix64_hw_rc_updates(CPUPPCState *env)
>  {
>  #ifdef CONFIG_ATOMIC64
> -    return true;
> +    return !env->disable_hw_rc_updates;
>  #else
> -    return !qemu_tcg_mttcg_enabled();
> +    return !qemu_tcg_mttcg_enabled() && !env->disable_hw_rc_updates;
>  #endif
>  }
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2019-05-10  6:38 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-03  5:53 [Qemu-devel] [QEMU-PPC] [PATCH 00/13] target/ppc: Implement KVM support under TCG Suraj Jitindar Singh
2019-05-03  5:53 ` Suraj Jitindar Singh
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 01/13] target/ppc: Implement the VTB for HV access Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-06  6:02   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 02/13] target/ppc: Work [S]PURR implementation and add HV support Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-06  6:15   ` David Gibson
2019-05-07  1:28     ` Suraj Jitindar Singh
2019-05-09  6:45       ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 03/13] target/ppc: Add SPR ASDR Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-06  6:16   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 04/13] target/ppc: Add SPR TBU40 Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-06  6:17   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 05/13] target/ppc: Add privileged message send facilities Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  2:09   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 06/13] target/ppc: Enforce that the root page directory size must be at least 5 Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  2:11   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 07/13] target/ppc: Handle partition scoped radix tree translation Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  2:28   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 08/13] target/ppc: Implement hcall H_SET_PARTITION_TABLE Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  2:30   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 09/13] target/ppc: Implement hcall H_ENTER_NESTED Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  2:57   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 10/13] target/ppc: Implement hcall H_TLB_INVALIDATE Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  6:28   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 11/13] target/ppc: Implement hcall H_COPY_TOFROM_GUEST Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  6:32   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 12/13] target/ppc: Introduce POWER9 DD2.2 cpu type Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  6:32   ` David Gibson
2019-05-03  5:53 ` [Qemu-devel] [QEMU-PPC] [PATCH 13/13] target/ppc: Enable SPAPR_CAP_NESTED_KVM_HV under tcg Suraj Jitindar Singh
2019-05-03  5:53   ` Suraj Jitindar Singh
2019-05-10  6:34   ` David Gibson
2019-05-03  5:58 ` [Qemu-devel] [QEMU-PPC] [PATCH 00/13] target/ppc: Implement KVM support under TCG Suraj Jitindar Singh
2019-05-03  5:58   ` Suraj Jitindar Singh
2019-05-06  6:20 ` David Gibson
2019-05-06 23:45   ` Suraj Jitindar Singh
