* [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
@ 2016-01-27 10:13 David Gibson
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
                   ` (10 more replies)
  0 siblings, 11 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
involving control bits in both the SLB and HPTE.  At present we
support a few of the options, but far fewer than real hardware.

We're able to get away with that in practice, because guests use a
device tree property to determine which page sizes are available and
we set that property to match.  However, the fact that the actual code
doesn't necessarily match what we put into the table of available page
sizes is another ugliness.

This series makes a number of cleanups to the page size handling.  The
upshot is that afterwards the softmmu code operates off the same page
size encoding table that is advertised to the guests, ensuring that
they will be in sync.

Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
include the options allowed in hardware (including MPSS).  We can fix
other hash MMU based CPUs in future if anyone cares enough.
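To illustrate the shape of the shared encoding table (names and values
here are illustrative, not the exact QEMU structures): the SLB entry's
encoding selects a base page size for the segment, and the HPTE's
encoding selects the actual page size used, with MPSS being the case
where one segment admits several actual sizes:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a page-size encoding table, loosely modelled
 * on what this series has the softmmu code share with the device tree
 * advertisement.  Struct and field names are made up for illustration. */

struct pte_enc {
    unsigned page_shift;   /* actual page size for this HPTE encoding */
    uint64_t hpte_enc;     /* encoding in the HPTE's LP bits */
};

struct seg_page_size {
    uint64_t slb_enc;      /* encoding in the SLB's L||LP bits */
    unsigned n_enc;        /* number of valid entries in enc[] */
    struct pte_enc enc[4]; /* actual page sizes usable in this segment */
};

/* An example POWER7/POWER8-like table: a 4K base segment supports only
 * 4K pages, while a 16M base segment supports 16M and, via MPSS, 64K
 * pages.  The encoding values below are examples, not hardware values. */
static const struct seg_page_size sizes[] = {
    { 0x0,   1, { { 12, 0x0 } } },
    { 0x100, 2, { { 24, 0x0 }, { 16, 0x1 } } },
};

/* Given the SLB encoding and the HPTE encoding, return the actual page
 * shift, or -1 if the combination is not in the table. */
static int lookup_page_shift(uint64_t slb_enc, uint64_t hpte_enc)
{
    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        if (sizes[i].slb_enc != slb_enc) {
            continue;
        }
        for (unsigned j = 0; j < sizes[i].n_enc; j++) {
            if (sizes[i].enc[j].hpte_enc == hpte_enc) {
                return sizes[i].enc[j].page_shift;
            }
        }
    }
    return -1;
}
```

Driving the softmmu lookup from one table like this is what keeps the
TCG behaviour and the device tree advertisement in sync by construction.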

For a simple benchmark I timed fully booting and then cleanly shutting
down a TCG guest (RHEL7.2 userspace with a recent upstream kernel),
repeated 5 times each on the current master branch, my current
ppc-for-2.6 branch, and this branch.  It looks like the series improves
speed, although the difference is pretty much negligible:

master:		2m25 2m28 2m26 2m26 2m26
ppc-for-2.6:    2m26 2m25 2m26 2m27 2m25
this series:    2m20 2m23 2m23 2m25 2m21

Please review, and I'll fold into ppc-for-2.6 for my next pull.

Changes since v1:
  * Fix a couple of simple but serious bugs in logic
  * Did some rudimentary benchmarking
Changes since RFC:
  * Moved lookup of SLB encodings table from SLB lookup time to SLB
      store time

David Gibson (10):
  target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
  target-ppc: Convert mmu-hash{32,64}.[ch] from CPUPPCState to
    PowerPCCPU
  target-ppc: Rework ppc_store_slb
  target-ppc: Rework SLB page size lookup
  target-ppc: Use actual page size encodings from HPTE
  target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
  target-ppc: Helper to determine page size information from hpte alone
  target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG

 hw/ppc/spapr_hcall.c        | 102 ++++-------------
 target-ppc/cpu.h            |   1 +
 target-ppc/helper.h         |   1 +
 target-ppc/kvm.c            |   2 +-
 target-ppc/kvm_ppc.h        |   5 -
 target-ppc/machine.c        |  20 ++++
 target-ppc/mmu-hash32.c     |  68 ++++++-----
 target-ppc/mmu-hash32.h     |  30 ++---
 target-ppc/mmu-hash64.c     | 270 +++++++++++++++++++++++++++++++-------------
 target-ppc/mmu-hash64.h     |  30 +++--
 target-ppc/mmu_helper.c     |  59 ++++------
 target-ppc/translate.c      |   2 +-
 target-ppc/translate_init.c |  32 ++++++
 13 files changed, 372 insertions(+), 250 deletions(-)

-- 
2.5.0


* [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-27 12:16   ` Thomas Huth
  2016-01-27 12:55   ` Laurent Vivier
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

This stub function is in the !KVM ifdef in target-ppc/kvm_ppc.h.  However,
no such function exists on the KVM side, nor is the stub ever called.

I think this originally referenced a function which read host page size
information from /proc, for which we now use the KVM GET_SMMU_INFO
extension instead.

In any case, it has no function now, so remove it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/kvm_ppc.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
index 5e1333d..62406ce 100644
--- a/target-ppc/kvm_ppc.h
+++ b/target-ppc/kvm_ppc.h
@@ -98,11 +98,6 @@ static inline int kvmppc_get_hypercall(CPUPPCState *env, uint8_t *buf, int buf_l
     return -1;
 }
 
-static inline int kvmppc_read_segment_page_sizes(uint32_t *prop, int maxcells)
-{
-    return -1;
-}
-
 static inline int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level)
 {
     return -1;
-- 
2.5.0


* [Qemu-devel] [PATCHv2 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-27 14:11   ` Laurent Vivier
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb David Gibson
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Like a lot of places, these files contain a mixture of functions taking
both the older CPUPPCState *env and the newer PowerPCCPU *cpu.  Move a
step closer to cleaning this up by standardizing on PowerPCCPU, except
for the helper_* functions, which are called with the CPUPPCState * from
TCG.

Callers and some related functions are updated as well; the boundaries of
what's changed here are a bit arbitrary.
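The conversion is mechanical because the CPU state struct is embedded in
the CPU object, so either pointer can be recovered from the other.  A
minimal sketch of the pattern, with the struct contents reduced to
placeholders (QEMU's real ppc_env_get_cpu() recovers the CPU the same
way, via the field offset):

```c
#include <stddef.h>

/* Reduced mock-ups of the QEMU types: CPUPPCState is embedded directly
 * in PowerPCCPU, which is what makes the env <-> cpu conversions in
 * this patch purely mechanical.  Contents are placeholders. */
typedef struct CPUPPCState {
    unsigned long msr;  /* stands in for the real register state */
} CPUPPCState;

typedef struct PowerPCCPU {
    int parent_obj;     /* stands in for the parent CPUState object */
    CPUPPCState env;
} PowerPCCPU;

/* cpu -> env is a plain field access... */
static CPUPPCState *cpu_to_env(PowerPCCPU *cpu)
{
    return &cpu->env;
}

/* ...and env -> cpu subtracts the field offset, as the helper_* entry
 * points (which still receive a CPUPPCState * from TCG) must do. */
static PowerPCCPU *env_to_cpu(CPUPPCState *env)
{
    return (PowerPCCPU *)((char *)env - offsetof(PowerPCCPU, env));
}
```

Standardizing on PowerPCCPU pushes the one remaining env -> cpu
conversion up to the helper_* boundary instead of scattering it through
the MMU code.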

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_hcall.c    | 31 ++++++++++---------
 target-ppc/kvm.c        |  2 +-
 target-ppc/mmu-hash32.c | 68 +++++++++++++++++++++++------------------
 target-ppc/mmu-hash32.h | 30 ++++++++++---------
 target-ppc/mmu-hash64.c | 80 +++++++++++++++++++++++++++++--------------------
 target-ppc/mmu-hash64.h | 21 ++++++-------
 target-ppc/mmu_helper.c | 13 ++++----
 7 files changed, 136 insertions(+), 109 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index c4ae255..4707196 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -160,7 +160,7 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
         pte_index &= ~7ULL;
         token = ppc_hash64_start_access(cpu, pte_index);
         for (; index < 8; index++) {
-            if ((ppc_hash64_load_hpte0(env, token, index) & HPTE64_V_VALID) == 0) {
+            if (!(ppc_hash64_load_hpte0(cpu, token, index) & HPTE64_V_VALID)) {
                 break;
             }
         }
@@ -170,14 +170,14 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
         }
     } else {
         token = ppc_hash64_start_access(cpu, pte_index);
-        if (ppc_hash64_load_hpte0(env, token, 0) & HPTE64_V_VALID) {
+        if (ppc_hash64_load_hpte0(cpu, token, 0) & HPTE64_V_VALID) {
             ppc_hash64_stop_access(token);
             return H_PTEG_FULL;
         }
         ppc_hash64_stop_access(token);
     }
 
-    ppc_hash64_store_hpte(env, pte_index + index,
+    ppc_hash64_store_hpte(cpu, pte_index + index,
                           pteh | HPTE64_V_HPTE_DIRTY, ptel);
 
     args[0] = pte_index + index;
@@ -191,11 +191,12 @@ typedef enum {
     REMOVE_HW = 3,
 } RemoveResult;
 
-static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
+static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
                                 target_ulong avpn,
                                 target_ulong flags,
                                 target_ulong *vp, target_ulong *rp)
 {
+    CPUPPCState *env = &cpu->env;
     uint64_t token;
     target_ulong v, r, rb;
 
@@ -203,9 +204,9 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
         return REMOVE_PARM;
     }
 
-    token = ppc_hash64_start_access(ppc_env_get_cpu(env), ptex);
-    v = ppc_hash64_load_hpte0(env, token, 0);
-    r = ppc_hash64_load_hpte1(env, token, 0);
+    token = ppc_hash64_start_access(cpu, ptex);
+    v = ppc_hash64_load_hpte0(cpu, token, 0);
+    r = ppc_hash64_load_hpte1(cpu, token, 0);
     ppc_hash64_stop_access(token);
 
     if ((v & HPTE64_V_VALID) == 0 ||
@@ -215,7 +216,7 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
     }
     *vp = v;
     *rp = r;
-    ppc_hash64_store_hpte(env, ptex, HPTE64_V_HPTE_DIRTY, 0);
+    ppc_hash64_store_hpte(cpu, ptex, HPTE64_V_HPTE_DIRTY, 0);
     rb = compute_tlbie_rb(v, r, ptex);
     ppc_tlb_invalidate_one(env, rb);
     return REMOVE_SUCCESS;
@@ -224,13 +225,12 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
 static target_ulong h_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
                              target_ulong opcode, target_ulong *args)
 {
-    CPUPPCState *env = &cpu->env;
     target_ulong flags = args[0];
     target_ulong pte_index = args[1];
     target_ulong avpn = args[2];
     RemoveResult ret;
 
-    ret = remove_hpte(env, pte_index, avpn, flags,
+    ret = remove_hpte(cpu, pte_index, avpn, flags,
                       &args[0], &args[1]);
 
     switch (ret) {
@@ -271,7 +271,6 @@ static target_ulong h_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
 static target_ulong h_bulk_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
                                   target_ulong opcode, target_ulong *args)
 {
-    CPUPPCState *env = &cpu->env;
     int i;
 
     for (i = 0; i < H_BULK_REMOVE_MAX_BATCH; i++) {
@@ -293,7 +292,7 @@ static target_ulong h_bulk_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
             return H_PARAMETER;
         }
 
-        ret = remove_hpte(env, *tsh & H_BULK_REMOVE_PTEX, tsl,
+        ret = remove_hpte(cpu, *tsh & H_BULK_REMOVE_PTEX, tsl,
                           (*tsh & H_BULK_REMOVE_FLAGS) >> 26,
                           &v, &r);
 
@@ -330,8 +329,8 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     }
 
     token = ppc_hash64_start_access(cpu, pte_index);
-    v = ppc_hash64_load_hpte0(env, token, 0);
-    r = ppc_hash64_load_hpte1(env, token, 0);
+    v = ppc_hash64_load_hpte0(cpu, token, 0);
+    r = ppc_hash64_load_hpte1(cpu, token, 0);
     ppc_hash64_stop_access(token);
 
     if ((v & HPTE64_V_VALID) == 0 ||
@@ -345,11 +344,11 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     r |= (flags << 48) & HPTE64_R_KEY_HI;
     r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
     rb = compute_tlbie_rb(v, r, pte_index);
-    ppc_hash64_store_hpte(env, pte_index,
+    ppc_hash64_store_hpte(cpu, pte_index,
                           (v & ~HPTE64_V_VALID) | HPTE64_V_HPTE_DIRTY, 0);
     ppc_tlb_invalidate_one(env, rb);
     /* Don't need a memory barrier, due to qemu's global lock */
-    ppc_hash64_store_hpte(env, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
+    ppc_hash64_store_hpte(cpu, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
     return H_SUCCESS;
 }
 
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 4524999..98d7ba6 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
              * Only restore valid entries
              */
             if (rb & SLB_ESID_V) {
-                ppc_store_slb(env, rb, rs);
+                ppc_store_slb(cpu, rb, rs);
             }
         }
 #endif
diff --git a/target-ppc/mmu-hash32.c b/target-ppc/mmu-hash32.c
index a00ae3c..4497480 100644
--- a/target-ppc/mmu-hash32.c
+++ b/target-ppc/mmu-hash32.c
@@ -83,9 +83,10 @@ static int ppc_hash32_pp_prot(int key, int pp, int nx)
     return prot;
 }
 
-static int ppc_hash32_pte_prot(CPUPPCState *env,
+static int ppc_hash32_pte_prot(PowerPCCPU *cpu,
                                target_ulong sr, ppc_hash_pte32_t pte)
 {
+    CPUPPCState *env = &cpu->env;
     unsigned pp, key;
 
     key = !!(msr_pr ? (sr & SR32_KP) : (sr & SR32_KS));
@@ -94,9 +95,11 @@ static int ppc_hash32_pte_prot(CPUPPCState *env,
     return ppc_hash32_pp_prot(key, pp, !!(sr & SR32_NX));
 }
 
-static target_ulong hash32_bat_size(CPUPPCState *env,
+static target_ulong hash32_bat_size(PowerPCCPU *cpu,
                                     target_ulong batu, target_ulong batl)
 {
+    CPUPPCState *env = &cpu->env;
+
     if ((msr_pr && !(batu & BATU32_VP))
         || (!msr_pr && !(batu & BATU32_VS))) {
         return 0;
@@ -105,7 +108,7 @@ static target_ulong hash32_bat_size(CPUPPCState *env,
     return BATU32_BEPI & ~((batu & BATU32_BL) << 15);
 }
 
-static int hash32_bat_prot(CPUPPCState *env,
+static int hash32_bat_prot(PowerPCCPU *cpu,
                            target_ulong batu, target_ulong batl)
 {
     int pp, prot;
@@ -121,7 +124,7 @@ static int hash32_bat_prot(CPUPPCState *env,
     return prot;
 }
 
-static target_ulong hash32_bat_601_size(CPUPPCState *env,
+static target_ulong hash32_bat_601_size(PowerPCCPU *cpu,
                                 target_ulong batu, target_ulong batl)
 {
     if (!(batl & BATL32_601_V)) {
@@ -131,9 +134,10 @@ static target_ulong hash32_bat_601_size(CPUPPCState *env,
     return BATU32_BEPI & ~((batl & BATL32_601_BL) << 17);
 }
 
-static int hash32_bat_601_prot(CPUPPCState *env,
+static int hash32_bat_601_prot(PowerPCCPU *cpu,
                                target_ulong batu, target_ulong batl)
 {
+    CPUPPCState *env = &cpu->env;
     int key, pp;
 
     pp = batu & BATU32_601_PP;
@@ -145,9 +149,10 @@ static int hash32_bat_601_prot(CPUPPCState *env,
     return ppc_hash32_pp_prot(key, pp, 0);
 }
 
-static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
+static hwaddr ppc_hash32_bat_lookup(PowerPCCPU *cpu, target_ulong ea, int rwx,
                                     int *prot)
 {
+    CPUPPCState *env = &cpu->env;
     target_ulong *BATlt, *BATut;
     int i;
 
@@ -166,9 +171,9 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
         target_ulong mask;
 
         if (unlikely(env->mmu_model == POWERPC_MMU_601)) {
-            mask = hash32_bat_601_size(env, batu, batl);
+            mask = hash32_bat_601_size(cpu, batu, batl);
         } else {
-            mask = hash32_bat_size(env, batu, batl);
+            mask = hash32_bat_size(cpu, batu, batl);
         }
         LOG_BATS("%s: %cBAT%d v " TARGET_FMT_lx " BATu " TARGET_FMT_lx
                  " BATl " TARGET_FMT_lx "\n", __func__,
@@ -178,9 +183,9 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
             hwaddr raddr = (batl & mask) | (ea & ~mask);
 
             if (unlikely(env->mmu_model == POWERPC_MMU_601)) {
-                *prot = hash32_bat_601_prot(env, batu, batl);
+                *prot = hash32_bat_601_prot(cpu, batu, batl);
             } else {
-                *prot = hash32_bat_prot(env, batu, batl);
+                *prot = hash32_bat_prot(cpu, batu, batl);
             }
 
             return raddr & TARGET_PAGE_MASK;
@@ -209,11 +214,12 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
     return -1;
 }
 
-static int ppc_hash32_direct_store(CPUPPCState *env, target_ulong sr,
+static int ppc_hash32_direct_store(PowerPCCPU *cpu, target_ulong sr,
                                    target_ulong eaddr, int rwx,
                                    hwaddr *raddr, int *prot)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
     int key = !!(msr_pr ? (sr & SR32_KP) : (sr & SR32_KS));
 
     qemu_log_mask(CPU_LOG_MMU, "direct store...\n");
@@ -293,12 +299,14 @@ static int ppc_hash32_direct_store(CPUPPCState *env, target_ulong sr,
     }
 }
 
-hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash)
+hwaddr get_pteg_offset32(PowerPCCPU *cpu, hwaddr hash)
 {
+    CPUPPCState *env = &cpu->env;
+
     return (hash * HASH_PTEG_SIZE_32) & env->htab_mask;
 }
 
-static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
+static hwaddr ppc_hash32_pteg_search(PowerPCCPU *cpu, hwaddr pteg_off,
                                      bool secondary, target_ulong ptem,
                                      ppc_hash_pte32_t *pte)
 {
@@ -307,8 +315,8 @@ static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
     int i;
 
     for (i = 0; i < HPTES_PER_GROUP; i++) {
-        pte0 = ppc_hash32_load_hpte0(env, pte_offset);
-        pte1 = ppc_hash32_load_hpte1(env, pte_offset);
+        pte0 = ppc_hash32_load_hpte0(cpu, pte_offset);
+        pte1 = ppc_hash32_load_hpte1(cpu, pte_offset);
 
         if ((pte0 & HPTE32_V_VALID)
             && (secondary == !!(pte0 & HPTE32_V_SECONDARY))
@@ -324,10 +332,11 @@ static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
     return -1;
 }
 
-static hwaddr ppc_hash32_htab_lookup(CPUPPCState *env,
+static hwaddr ppc_hash32_htab_lookup(PowerPCCPU *cpu,
                                      target_ulong sr, target_ulong eaddr,
                                      ppc_hash_pte32_t *pte)
 {
+    CPUPPCState *env = &cpu->env;
     hwaddr pteg_off, pte_offset;
     hwaddr hash;
     uint32_t vsid, pgidx, ptem;
@@ -348,16 +357,16 @@ static hwaddr ppc_hash32_htab_lookup(CPUPPCState *env,
             " vsid=%" PRIx32 " ptem=%" PRIx32
             " hash=" TARGET_FMT_plx "\n",
             env->htab_base, env->htab_mask, vsid, ptem, hash);
-    pteg_off = get_pteg_offset32(env, hash);
-    pte_offset = ppc_hash32_pteg_search(env, pteg_off, 0, ptem, pte);
+    pteg_off = get_pteg_offset32(cpu, hash);
+    pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 0, ptem, pte);
     if (pte_offset == -1) {
         /* Secondary PTEG lookup */
         qemu_log_mask(CPU_LOG_MMU, "1 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
                 " vsid=%" PRIx32 " api=%" PRIx32
                 " hash=" TARGET_FMT_plx "\n", env->htab_base,
                 env->htab_mask, vsid, ptem, ~hash);
-        pteg_off = get_pteg_offset32(env, ~hash);
-        pte_offset = ppc_hash32_pteg_search(env, pteg_off, 1, ptem, pte);
+        pteg_off = get_pteg_offset32(cpu, ~hash);
+        pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 1, ptem, pte);
     }
 
     return pte_offset;
@@ -399,7 +408,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
 
     /* 2. Check Block Address Translation entries (BATs) */
     if (env->nb_BATs != 0) {
-        raddr = ppc_hash32_bat_lookup(env, eaddr, rwx, &prot);
+        raddr = ppc_hash32_bat_lookup(cpu, eaddr, rwx, &prot);
         if (raddr != -1) {
             if (need_prot[rwx] & ~prot) {
                 if (rwx == 2) {
@@ -430,7 +439,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
 
     /* 4. Handle direct store segments */
     if (sr & SR32_T) {
-        if (ppc_hash32_direct_store(env, sr, eaddr, rwx,
+        if (ppc_hash32_direct_store(cpu, sr, eaddr, rwx,
                                     &raddr, &prot) == 0) {
             tlb_set_page(cs, eaddr & TARGET_PAGE_MASK,
                          raddr & TARGET_PAGE_MASK, prot, mmu_idx,
@@ -449,7 +458,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
     }
 
     /* 6. Locate the PTE in the hash table */
-    pte_offset = ppc_hash32_htab_lookup(env, sr, eaddr, &pte);
+    pte_offset = ppc_hash32_htab_lookup(cpu, sr, eaddr, &pte);
     if (pte_offset == -1) {
         if (rwx == 2) {
             cs->exception_index = POWERPC_EXCP_ISI;
@@ -472,7 +481,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
 
     /* 7. Check access permissions */
 
-    prot = ppc_hash32_pte_prot(env, sr, pte);
+    prot = ppc_hash32_pte_prot(cpu, sr, pte);
 
     if (need_prot[rwx] & ~prot) {
         /* Access right violation */
@@ -507,7 +516,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
     }
 
     if (new_pte1 != pte.pte1) {
-        ppc_hash32_store_hpte1(env, pte_offset, new_pte1);
+        ppc_hash32_store_hpte1(cpu, pte_offset, new_pte1);
     }
 
     /* 9. Determine the real address from the PTE */
@@ -520,8 +529,9 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
     return 0;
 }
 
-hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
+hwaddr ppc_hash32_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
 {
+    CPUPPCState *env = &cpu->env;
     target_ulong sr;
     hwaddr pte_offset;
     ppc_hash_pte32_t pte;
@@ -533,7 +543,7 @@ hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
     }
 
     if (env->nb_BATs != 0) {
-        hwaddr raddr = ppc_hash32_bat_lookup(env, eaddr, 0, &prot);
+        hwaddr raddr = ppc_hash32_bat_lookup(cpu, eaddr, 0, &prot);
         if (raddr != -1) {
             return raddr;
         }
@@ -546,7 +556,7 @@ hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
         return -1;
     }
 
-    pte_offset = ppc_hash32_htab_lookup(env, sr, eaddr, &pte);
+    pte_offset = ppc_hash32_htab_lookup(cpu, sr, eaddr, &pte);
     if (pte_offset == -1) {
         return -1;
     }
diff --git a/target-ppc/mmu-hash32.h b/target-ppc/mmu-hash32.h
index d515d4f..afbb9dd 100644
--- a/target-ppc/mmu-hash32.h
+++ b/target-ppc/mmu-hash32.h
@@ -3,8 +3,8 @@
 
 #ifndef CONFIG_USER_ONLY
 
-hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash);
-hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong addr);
+hwaddr get_pteg_offset32(PowerPCCPU *cpu, hwaddr hash);
+hwaddr ppc_hash32_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
 int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
 
@@ -65,40 +65,42 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
 #define HPTE32_R_WIMG           0x00000078
 #define HPTE32_R_PP             0x00000003
 
-static inline target_ulong ppc_hash32_load_hpte0(CPUPPCState *env,
+static inline target_ulong ppc_hash32_load_hpte0(PowerPCCPU *cpu,
                                                  hwaddr pte_offset)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    return ldl_phys(cs->as, env->htab_base + pte_offset);
+    return ldl_phys(CPU(cpu)->as, env->htab_base + pte_offset);
 }
 
-static inline target_ulong ppc_hash32_load_hpte1(CPUPPCState *env,
+static inline target_ulong ppc_hash32_load_hpte1(PowerPCCPU *cpu,
                                                  hwaddr pte_offset)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    return ldl_phys(cs->as, env->htab_base + pte_offset + HASH_PTE_SIZE_32/2);
+    return ldl_phys(CPU(cpu)->as,
+                    env->htab_base + pte_offset + HASH_PTE_SIZE_32 / 2);
 }
 
-static inline void ppc_hash32_store_hpte0(CPUPPCState *env,
+static inline void ppc_hash32_store_hpte0(PowerPCCPU *cpu,
                                           hwaddr pte_offset, target_ulong pte0)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    stl_phys(cs->as, env->htab_base + pte_offset, pte0);
+    stl_phys(CPU(cpu)->as, env->htab_base + pte_offset, pte0);
 }
 
-static inline void ppc_hash32_store_hpte1(CPUPPCState *env,
+static inline void ppc_hash32_store_hpte1(PowerPCCPU *cpu,
                                           hwaddr pte_offset, target_ulong pte1)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    stl_phys(cs->as, env->htab_base + pte_offset + HASH_PTE_SIZE_32/2, pte1);
+    stl_phys(CPU(cpu)->as,
+             env->htab_base + pte_offset + HASH_PTE_SIZE_32 / 2, pte1);
 }
 
 typedef struct {
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 34e20fa..03e25fd 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -40,8 +40,9 @@ bool kvmppc_kern_htab;
  * SLB handling
  */
 
-static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
+static ppc_slb_t *slb_lookup(PowerPCCPU *cpu, target_ulong eaddr)
 {
+    CPUPPCState *env = &cpu->env;
     uint64_t esid_256M, esid_1T;
     int n;
 
@@ -69,12 +70,13 @@ static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
     return NULL;
 }
 
-void dump_slb(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env)
+void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu)
 {
+    CPUPPCState *env = &cpu->env;
     int i;
     uint64_t slbe, slbv;
 
-    cpu_synchronize_state(CPU(ppc_env_get_cpu(env)));
+    cpu_synchronize_state(CPU(cpu));
 
     cpu_fprintf(f, "SLB\tESID\t\t\tVSID\n");
     for (i = 0; i < env->slb_nr; i++) {
@@ -117,7 +119,7 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
     PowerPCCPU *cpu = ppc_env_get_cpu(env);
     ppc_slb_t *slb;
 
-    slb = slb_lookup(env, addr);
+    slb = slb_lookup(cpu, addr);
     if (!slb) {
         return;
     }
@@ -133,8 +135,9 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
     }
 }
 
-int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
 {
+    CPUPPCState *env = &cpu->env;
     int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
@@ -159,9 +162,10 @@ int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
     return 0;
 }
 
-static int ppc_load_slb_esid(CPUPPCState *env, target_ulong rb,
+static int ppc_load_slb_esid(PowerPCCPU *cpu, target_ulong rb,
                              target_ulong *rt)
 {
+    CPUPPCState *env = &cpu->env;
     int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
@@ -173,9 +177,10 @@ static int ppc_load_slb_esid(CPUPPCState *env, target_ulong rb,
     return 0;
 }
 
-static int ppc_load_slb_vsid(CPUPPCState *env, target_ulong rb,
+static int ppc_load_slb_vsid(PowerPCCPU *cpu, target_ulong rb,
                              target_ulong *rt)
 {
+    CPUPPCState *env = &cpu->env;
     int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
@@ -189,7 +194,9 @@ static int ppc_load_slb_vsid(CPUPPCState *env, target_ulong rb,
 
 void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
 {
-    if (ppc_store_slb(env, rb, rs) < 0) {
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
+
+    if (ppc_store_slb(cpu, rb, rs) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
@@ -197,9 +204,10 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
 
 target_ulong helper_load_slb_esid(CPUPPCState *env, target_ulong rb)
 {
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
     target_ulong rt = 0;
 
-    if (ppc_load_slb_esid(env, rb, &rt) < 0) {
+    if (ppc_load_slb_esid(cpu, rb, &rt) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
@@ -208,9 +216,10 @@ target_ulong helper_load_slb_esid(CPUPPCState *env, target_ulong rb)
 
 target_ulong helper_load_slb_vsid(CPUPPCState *env, target_ulong rb)
 {
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
     target_ulong rt = 0;
 
-    if (ppc_load_slb_vsid(env, rb, &rt) < 0) {
+    if (ppc_load_slb_vsid(cpu, rb, &rt) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
@@ -221,9 +230,10 @@ target_ulong helper_load_slb_vsid(CPUPPCState *env, target_ulong rb)
  * 64-bit hash table MMU handling
  */
 
-static int ppc_hash64_pte_prot(CPUPPCState *env,
+static int ppc_hash64_pte_prot(PowerPCCPU *cpu,
                                ppc_slb_t *slb, ppc_hash_pte64_t pte)
 {
+    CPUPPCState *env = &cpu->env;
     unsigned pp, key;
     /* Some pp bit combinations have undefined behaviour, so default
      * to no access in those cases */
@@ -273,12 +283,12 @@ static int ppc_hash64_pte_prot(CPUPPCState *env,
     return prot;
 }
 
-static int ppc_hash64_amr_prot(CPUPPCState *env, ppc_hash_pte64_t pte)
+static int ppc_hash64_amr_prot(PowerPCCPU *cpu, ppc_hash_pte64_t pte)
 {
+    CPUPPCState *env = &cpu->env;
     int key, amrbits;
     int prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
 
-
     /* Only recent MMUs implement Virtual Page Class Key Protection */
     if (!(env->mmu_model & POWERPC_MMU_AMR)) {
         return prot;
@@ -347,23 +357,24 @@ void ppc_hash64_stop_access(uint64_t token)
     }
 }
 
-static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
+static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
                                      bool secondary, target_ulong ptem,
                                      ppc_hash_pte64_t *pte)
 {
+    CPUPPCState *env = &cpu->env;
     int i;
     uint64_t token;
     target_ulong pte0, pte1;
     target_ulong pte_index;
 
     pte_index = (hash & env->htab_mask) * HPTES_PER_GROUP;
-    token = ppc_hash64_start_access(ppc_env_get_cpu(env), pte_index);
+    token = ppc_hash64_start_access(cpu, pte_index);
     if (!token) {
         return -1;
     }
     for (i = 0; i < HPTES_PER_GROUP; i++) {
-        pte0 = ppc_hash64_load_hpte0(env, token, i);
-        pte1 = ppc_hash64_load_hpte1(env, token, i);
+        pte0 = ppc_hash64_load_hpte0(cpu, token, i);
+        pte1 = ppc_hash64_load_hpte1(cpu, token, i);
 
         if ((pte0 & HPTE64_V_VALID)
             && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
@@ -399,10 +410,11 @@ static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
     return epnshift;
 }
 
-static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
+static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
                                      ppc_slb_t *slb, target_ulong eaddr,
                                      ppc_hash_pte64_t *pte)
 {
+    CPUPPCState *env = &cpu->env;
     hwaddr pte_offset;
     hwaddr hash;
     uint64_t vsid, epnshift, epnmask, epn, ptem;
@@ -435,7 +447,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
             " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx
             " hash=" TARGET_FMT_plx "\n",
             env->htab_base, env->htab_mask, vsid, ptem,  hash);
-    pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, pte);
+    pte_offset = ppc_hash64_pteg_search(cpu, hash, 0, ptem, pte);
 
     if (pte_offset == -1) {
         /* Secondary PTEG lookup */
@@ -445,7 +457,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
                 " hash=" TARGET_FMT_plx "\n", env->htab_base,
                 env->htab_mask, vsid, ptem, ~hash);
 
-        pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, pte);
+        pte_offset = ppc_hash64_pteg_search(cpu, ~hash, 1, ptem, pte);
     }
 
     return pte_offset;
@@ -492,7 +504,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     }
 
     /* 2. Translation is on, so look up the SLB */
-    slb = slb_lookup(env, eaddr);
+    slb = slb_lookup(cpu, eaddr);
 
     if (!slb) {
         if (rwx == 2) {
@@ -514,7 +526,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     }
 
     /* 4. Locate the PTE in the hash table */
-    pte_offset = ppc_hash64_htab_lookup(env, slb, eaddr, &pte);
+    pte_offset = ppc_hash64_htab_lookup(cpu, slb, eaddr, &pte);
     if (pte_offset == -1) {
         if (rwx == 2) {
             cs->exception_index = POWERPC_EXCP_ISI;
@@ -536,8 +548,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 5. Check access permissions */
 
-    pp_prot = ppc_hash64_pte_prot(env, slb, pte);
-    amr_prot = ppc_hash64_amr_prot(env, pte);
+    pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
+    amr_prot = ppc_hash64_amr_prot(cpu, pte);
     prot = pp_prot & amr_prot;
 
     if ((need_prot[rwx] & ~prot) != 0) {
@@ -580,7 +592,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     }
 
     if (new_pte1 != pte.pte1) {
-        ppc_hash64_store_hpte(env, pte_offset / HASH_PTE_SIZE_64,
+        ppc_hash64_store_hpte(cpu, pte_offset / HASH_PTE_SIZE_64,
                               pte.pte0, new_pte1);
     }
 
@@ -594,8 +606,9 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     return 0;
 }
 
-hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
+hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
 {
+    CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
@@ -605,12 +618,12 @@ hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
         return addr & 0x0FFFFFFFFFFFFFFFULL;
     }
 
-    slb = slb_lookup(env, addr);
+    slb = slb_lookup(cpu, addr);
     if (!slb) {
         return -1;
     }
 
-    pte_offset = ppc_hash64_htab_lookup(env, slb, addr, &pte);
+    pte_offset = ppc_hash64_htab_lookup(cpu, slb, addr, &pte);
     if (pte_offset == -1) {
         return -1;
     }
@@ -618,11 +631,11 @@ hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
     return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
 }
 
-void ppc_hash64_store_hpte(CPUPPCState *env,
+void ppc_hash64_store_hpte(PowerPCCPU *cpu,
                            target_ulong pte_index,
                            target_ulong pte0, target_ulong pte1)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     if (kvmppc_kern_htab) {
         kvmppc_hash64_write_pte(env, pte_index, pte0, pte1);
@@ -632,9 +645,10 @@ void ppc_hash64_store_hpte(CPUPPCState *env,
     pte_index *= HASH_PTE_SIZE_64;
     if (env->external_htab) {
         stq_p(env->external_htab + pte_index, pte0);
-        stq_p(env->external_htab + pte_index + HASH_PTE_SIZE_64/2, pte1);
+        stq_p(env->external_htab + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
     } else {
-        stq_phys(cs->as, env->htab_base + pte_index, pte0);
-        stq_phys(cs->as, env->htab_base + pte_index + HASH_PTE_SIZE_64/2, pte1);
+        stq_phys(CPU(cpu)->as, env->htab_base + pte_index, pte0);
+        stq_phys(CPU(cpu)->as,
+                 env->htab_base + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
     }
 }
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 291750f..6e3de7e 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -4,12 +4,13 @@
 #ifndef CONFIG_USER_ONLY
 
 #ifdef TARGET_PPC64
-void dump_slb(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env);
-int ppc_store_slb (CPUPPCState *env, target_ulong rb, target_ulong rs);
-hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr);
+void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
+void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
+hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
-void ppc_hash64_store_hpte(CPUPPCState *env, target_ulong index,
+void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
                            target_ulong pte0, target_ulong pte1);
 #endif
 
@@ -85,31 +86,31 @@ extern bool kvmppc_kern_htab;
 uint64_t ppc_hash64_start_access(PowerPCCPU *cpu, target_ulong pte_index);
 void ppc_hash64_stop_access(uint64_t token);
 
-static inline target_ulong ppc_hash64_load_hpte0(CPUPPCState *env,
+static inline target_ulong ppc_hash64_load_hpte0(PowerPCCPU *cpu,
                                                  uint64_t token, int index)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
     uint64_t addr;
 
     addr = token + (index * HASH_PTE_SIZE_64);
     if (env->external_htab) {
         return  ldq_p((const void *)(uintptr_t)addr);
     } else {
-        return ldq_phys(cs->as, addr);
+        return ldq_phys(CPU(cpu)->as, addr);
     }
 }
 
-static inline target_ulong ppc_hash64_load_hpte1(CPUPPCState *env,
+static inline target_ulong ppc_hash64_load_hpte1(PowerPCCPU *cpu,
                                                  uint64_t token, int index)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
     uint64_t addr;
 
     addr = token + (index * HASH_PTE_SIZE_64) + HASH_PTE_SIZE_64/2;
     if (env->external_htab) {
         return  ldq_p((const void *)(uintptr_t)addr);
     } else {
-        return ldq_phys(cs->as, addr);
+        return ldq_phys(CPU(cpu)->as, addr);
     }
 }
 
diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index 5217691..0ab73bc 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -1297,7 +1297,7 @@ void dump_mmu(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env)
     case POWERPC_MMU_2_06a:
     case POWERPC_MMU_2_07:
     case POWERPC_MMU_2_07a:
-        dump_slb(f, cpu_fprintf, env);
+        dump_slb(f, cpu_fprintf, ppc_env_get_cpu(env));
         break;
 #endif
     default:
@@ -1439,12 +1439,12 @@ hwaddr ppc_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
     case POWERPC_MMU_2_06a:
     case POWERPC_MMU_2_07:
     case POWERPC_MMU_2_07a:
-        return ppc_hash64_get_phys_page_debug(env, addr);
+        return ppc_hash64_get_phys_page_debug(cpu, addr);
 #endif
 
     case POWERPC_MMU_32B:
     case POWERPC_MMU_601:
-        return ppc_hash32_get_phys_page_debug(env, addr);
+        return ppc_hash32_get_phys_page_debug(cpu, addr);
 
     default:
         ;
@@ -1510,6 +1510,7 @@ static int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                                     int rw, int mmu_idx)
 {
     CPUState *cs = CPU(ppc_env_get_cpu(env));
+    PowerPCCPU *cpu = POWERPC_CPU(cs);
     mmu_ctx_t ctx;
     int access_type;
     int ret = 0;
@@ -1611,9 +1612,9 @@ static int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                 tlb_miss:
                     env->error_code |= ctx.key << 19;
                     env->spr[SPR_HASH1] = env->htab_base +
-                        get_pteg_offset32(env, ctx.hash[0]);
+                        get_pteg_offset32(cpu, ctx.hash[0]);
                     env->spr[SPR_HASH2] = env->htab_base +
-                        get_pteg_offset32(env, ctx.hash[1]);
+                        get_pteg_offset32(cpu, ctx.hash[1]);
                     break;
                 case POWERPC_MMU_SOFT_74xx:
                     if (rw == 1) {
@@ -2101,7 +2102,7 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
         /* flags = flags */
         rs |= ((value >> 27) & 0xf) << 8;
 
-        ppc_store_slb(env, rb, rs);
+        ppc_store_slb(cpu, rb, rs);
     } else
 #endif
     if (env->sr[srnum] != value) {
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-27 17:21   ` Laurent Vivier
  2016-01-28  4:16   ` Benjamin Herrenschmidt
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 04/10] target-ppc: Rework SLB page size lookup David Gibson
                   ` (7 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
Currently it takes two parameters, which contain values encoded as the
register arguments to the slbmte instruction: one register contains the
ESID portion of the SLBE along with the slot number, while the other
contains the VSID portion of the SLBE.

We're shortly going to want to do some SLB updates from other code where
it is more convenient to supply the slot number and ESID separately, so
rework this function and its callers to work this way.

As a bonus, this slightly simplifies the emulation of segment registers
when running a 32-bit OS on a 64-bit CPU.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/kvm.c        |  2 +-
 target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
 target-ppc/mmu-hash64.h |  3 ++-
 target-ppc/mmu_helper.c | 14 +++++---------
 4 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 98d7ba6..0f45380 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
              * Only restore valid entries
              */
             if (rb & SLB_ESID_V) {
-                ppc_store_slb(cpu, rb, rs);
+                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs);
             }
         }
 #endif
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 03e25fd..6e05643 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
     }
 }
 
-int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
+                  target_ulong esid, target_ulong vsid)
 {
     CPUPPCState *env = &cpu->env;
-    int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
-    if (rb & (0x1000 - env->slb_nr)) {
-        return -1; /* Reserved bits set or slot too high */
+    if (slot >= env->slb_nr) {
+        return -1; /* Bad slot number */
+    }
+    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
+        return -1; /* Reserved bits set */
     }
-    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
+    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
         return -1; /* Bad segment size */
     }
-    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
+    if ((vsid & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
         return -1; /* 1T segment on MMU that doesn't support it */
     }
 
-    /* Mask out the slot number as we store the entry */
-    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
-    slb->vsid = rs;
+    slb->esid = esid;
+    slb->vsid = vsid;
 
     LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
-            " %016" PRIx64 "\n", __func__, slot, rb, rs,
+            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
             slb->esid, slb->vsid);
 
     return 0;
@@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
 {
     PowerPCCPU *cpu = ppc_env_get_cpu(env);
 
-    if (ppc_store_slb(cpu, rb, rs) < 0) {
+    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 6e3de7e..24fd2c4 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -6,7 +6,8 @@
 #ifdef TARGET_PPC64
 void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
 void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
-int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
+                  target_ulong esid, target_ulong vsid);
 hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index 0ab73bc..c040b17 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -2088,21 +2088,17 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
             (int)srnum, value, env->sr[srnum]);
 #if defined(TARGET_PPC64)
     if (env->mmu_model & POWERPC_MMU_64) {
-        uint64_t rb = 0, rs = 0;
+        uint64_t esid, vsid;
 
         /* ESID = srnum */
-        rb |= ((uint32_t)srnum & 0xf) << 28;
-        /* Set the valid bit */
-        rb |= SLB_ESID_V;
-        /* Index = ESID */
-        rb |= (uint32_t)srnum;
+        esid = ((uint64_t)(srnum & 0xf) << 28) | SLB_ESID_V;
 
         /* VSID = VSID */
-        rs |= (value & 0xfffffff) << 12;
+        vsid = (value & 0xfffffff) << 12;
         /* flags = flags */
-        rs |= ((value >> 27) & 0xf) << 8;
+        vsid |= ((value >> 27) & 0xf) << 8;
 
-        ppc_store_slb(cpu, rb, rs);
+        ppc_store_slb(cpu, srnum, esid, vsid);
     } else
 #endif
     if (env->sr[srnum] != value) {
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCHv2 04/10] target-ppc: Rework SLB page size lookup
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (2 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-28  4:17   ` Benjamin Herrenschmidt
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Currently, the ppc_hash64_page_shift() function looks up a page size based
on information in an SLB entry.  It open-codes the bit translation for
existing CPUs; however, different CPU models can have different SLB
encodings.  We already store those in the 'sps' table in CPUPPCState, but
we don't currently enforce that it actually matches the logic in
ppc_hash64_page_shift().

This patch reworks lookup of page size from SLB in several ways:
  * ppc_store_slb() will now fail (triggering an illegal instruction
    exception) if given a bad SLB page size encoding
  * On success ppc_store_slb() stores a pointer to the relevant entry in
    the page size table in the SLB entry.  This is looked up directly from
    the published table of page size encodings, so can't get out of sync.
  * ppc_hash64_htab_lookup() and others now use this precached page size
    information rather than decoding the SLB values
  * Now that callers have easy access to the page_shift,
    ppc_hash64_pte_raddr() amounts to just a deposit64(), so remove it and
    have the callers use deposit64() directly.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/cpu.h        |  1 +
 target-ppc/machine.c    | 20 +++++++++++++
 target-ppc/mmu-hash64.c | 74 +++++++++++++++++++++++--------------------------
 3 files changed, 56 insertions(+), 39 deletions(-)

diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
index 2bc96b4..0820390 100644
--- a/target-ppc/cpu.h
+++ b/target-ppc/cpu.h
@@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
 struct ppc_slb_t {
     uint64_t esid;
     uint64_t vsid;
+    const struct ppc_one_seg_page_size *sps;
 };
 
 #define MAX_SLB_ENTRIES         64
diff --git a/target-ppc/machine.c b/target-ppc/machine.c
index b61c060..ca62d3e 100644
--- a/target-ppc/machine.c
+++ b/target-ppc/machine.c
@@ -2,6 +2,7 @@
 #include "hw/boards.h"
 #include "sysemu/kvm.h"
 #include "helper_regs.h"
+#include "mmu-hash64.h"
 
 static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
 {
@@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
     return (cpu->env.mmu_model & POWERPC_MMU_64);
 }
 
+static int slb_post_load(void *opaque, int version_id)
+{
+    PowerPCCPU *cpu = opaque;
+    CPUPPCState *env = &cpu->env;
+    int i;
+
+    /* We've pulled in the raw esid and vsid values from the migration
+     * stream, but we need to recompute the page size pointers */
+    for (i = 0; i < env->slb_nr; i++) {
+        if (ppc_store_slb(cpu, i, env->slb[i].esid, env->slb[i].vsid) < 0) {
+            /* Migration source had bad values in its SLB */
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
 static const VMStateDescription vmstate_slb = {
     .name = "cpu/slb",
     .version_id = 1,
     .minimum_version_id = 1,
     .needed = slb_needed,
+    .post_load = slb_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
         VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 6e05643..b784791 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -19,6 +19,7 @@
  */
 #include "cpu.h"
 #include "exec/helper-proto.h"
+#include "qemu/error-report.h"
 #include "sysemu/kvm.h"
 #include "kvm_ppc.h"
 #include "mmu-hash64.h"
@@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
 {
     CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb = &env->slb[slot];
+    const struct ppc_one_seg_page_size *sps = NULL;
+    int i;
 
     if (slot >= env->slb_nr) {
         return -1; /* Bad slot number */
@@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
         return -1; /* 1T segment on MMU that doesn't support it */
     }
 
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
+
+        if (!sps1->page_shift) {
+            break;
+        }
+
+        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
+            sps = sps1;
+            break;
+        }
+    }
+
+    if (!sps) {
+        error_report("Bad page size encoding in SLB store: slot "TARGET_FMT_lu
+                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
+                     slot, esid, vsid);
+        return -1;
+    }
+
     slb->esid = esid;
     slb->vsid = vsid;
+    slb->sps = sps;
 
     LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
             " %016" PRIx64 "\n", __func__, slot, esid, vsid,
@@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
     return -1;
 }
 
-static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
-{
-    uint64_t epnshift;
-
-    /* Page size according to the SLB, which we use to generate the
-     * EPN for hash table lookup..  When we implement more recent MMU
-     * extensions this might be different from the actual page size
-     * encoded in the PTE */
-    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
-        epnshift = TARGET_PAGE_BITS;
-    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
-        epnshift = TARGET_PAGE_BITS_64K;
-    } else {
-        epnshift = TARGET_PAGE_BITS_16M;
-    }
-    return epnshift;
-}
-
 static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
                                      ppc_slb_t *slb, target_ulong eaddr,
                                      ppc_hash_pte64_t *pte)
@@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     CPUPPCState *env = &cpu->env;
     hwaddr pte_offset;
     hwaddr hash;
-    uint64_t vsid, epnshift, epnmask, epn, ptem;
+    uint64_t vsid, epnmask, epn, ptem;
+
+    /* The SLB store path should prevent any bad page size encodings
+     * getting in there, so: */
+    assert(slb->sps);
 
-    epnshift = ppc_hash64_page_shift(slb);
-    epnmask = ~((1ULL << epnshift) - 1);
+    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
 
     if (slb->vsid & SLB_VSID_B) {
         /* 1TB segment */
         vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
         epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
-        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
+        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
     } else {
         /* 256M segment */
         vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
         epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
-        hash = vsid ^ (epn >> epnshift);
+        hash = vsid ^ (epn >> slb->sps->page_shift);
     }
     ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) & HPTE64_V_AVPN);
 
@@ -465,20 +474,6 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     return pte_offset;
 }
 
-static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t pte,
-                                   target_ulong eaddr)
-{
-    hwaddr mask;
-    int target_page_bits;
-    hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
-    /*
-     * We support 4K, 64K and 16M now
-     */
-    target_page_bits = ppc_hash64_page_shift(slb);
-    mask = (1ULL << target_page_bits) - 1;
-    return (rpn & ~mask) | (eaddr & mask);
-}
-
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
                                 int rwx, int mmu_idx)
 {
@@ -600,7 +595,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 7. Determine the real address from the PTE */
 
-    raddr = ppc_hash64_pte_raddr(slb, pte, eaddr);
+    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps->page_shift, eaddr);
 
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                  prot, mmu_idx, TARGET_PAGE_SIZE);
@@ -630,7 +625,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         return -1;
     }
 
-    return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
+    return deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps->page_shift, addr)
+        & TARGET_PAGE_MASK;
 }
 
 void ppc_hash64_store_hpte(PowerPCCPU *cpu,
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCHv2 05/10] target-ppc: Use actual page size encodings from HPTE
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (3 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 04/10] target-ppc: Rework SLB page size lookup David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-28  4:18   ` Benjamin Herrenschmidt
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

At present the 64-bit hash MMU code uses information from the SLB to
determine the page size of a translation.  We do need that information to
correctly look up the hash table.  However the MMU also allows a
possibly larger page size to be encoded into the HPTE itself, which is used
to populate the TLB.  At present qemu doesn't check that, and so doesn't
support the MPSS "Multiple Page Size per Segment" feature.

This makes a start on allowing this by adding an hpte_page_shift()
function which looks up the page size of an HPTE.  We use this to validate
page size encodings on faults, and to populate the qemu TLB with larger
page sizes when appropriate.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/mmu-hash64.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 3 deletions(-)

diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index b784791..ee1e8bf 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -21,6 +21,7 @@
 #include "exec/helper-proto.h"
 #include "qemu/error-report.h"
 #include "sysemu/kvm.h"
+#include "qemu/error-report.h"
 #include "kvm_ppc.h"
 #include "mmu-hash64.h"
 
@@ -474,12 +475,50 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     return pte_offset;
 }
 
+static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
+    uint64_t pte0, uint64_t pte1)
+{
+    int i;
+
+    if (!(pte0 & HPTE64_V_LARGE)) {
+        if (sps->page_shift != 12) {
+            /* 4kiB page in a non 4kiB segment */
+            return 0;
+        }
+        /* Normal 4kiB page */
+        return 12;
+    }
+
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_page_size *ps = &sps->enc[i];
+        uint64_t mask;
+
+        if (!ps->page_shift) {
+            break;
+        }
+
+        if (ps->page_shift == 12) {
+            /* L bit is set so this can't be a 4kiB page */
+            continue;
+        }
+
+        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
+
+        if ((pte1 & mask) == (ps->pte_enc << HPTE64_R_RPN_SHIFT)) {
+            return ps->page_shift;
+        }
+    }
+
+    return 0; /* Bad page size encoding */
+}
+
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
                                 int rwx, int mmu_idx)
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb;
+    unsigned apshift;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
     int pp_prot, amr_prot, prot;
@@ -543,6 +582,18 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     qemu_log_mask(CPU_LOG_MMU,
                 "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
 
+    /* Validate page size encoding */
+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    if (!apshift) {
+        error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
+                     " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
+        /* Not entirely sure what the right action is here, but machine
+         * check seems reasonable */
+        cs->exception_index = POWERPC_EXCP_MCHECK;
+        env->error_code = 0;
+        return 1;
+    }
+
     /* 5. Check access permissions */
 
     pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
@@ -595,10 +646,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 7. Determine the real address from the PTE */
 
-    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps->page_shift, eaddr);
+    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, apshift, eaddr);
 
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
-                 prot, mmu_idx, TARGET_PAGE_SIZE);
+                 prot, mmu_idx, 1ULL << apshift);
 
     return 0;
 }
@@ -609,6 +660,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
     ppc_slb_t *slb;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
+    unsigned apshift;
 
     if (msr_dr == 0) {
         /* In real mode the top 4 effective address bits are ignored */
@@ -625,7 +677,12 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         return -1;
     }
 
-    return deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps->page_shift, addr)
+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    if (!apshift) {
+        return -1;
+    }
+
+    return deposit64(pte.pte1 & HPTE64_R_RPN, 0, apshift, addr)
         & TARGET_PAGE_MASK;
 }
 
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (4 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-27 18:06   ` Laurent Vivier
                     ` (2 more replies)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
                   ` (4 subsequent siblings)
  10 siblings, 3 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

ppc_tlb_invalidate_one() has a big switch handling many different MMU
types.  However, most of those branches can never be reached:

It is called from 3 places: from remove_hpte() and h_protect() in
spapr_hcall.c (which always has a 64-bit hash MMU type), and from
helper_tlbie() in mmu_helper.c.

Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
set only for 32-bit and 64-bit hash MMU models, and gen_tlbiva() is
used only on 440 and 460 models with the BookE MMU model.

This means the exhaustive list of MMU types which may call
ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.

Clean up by removing logic for all other MMU types from
ppc_tlb_invalidate_one().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/mmu_helper.c | 20 ++------------------
 1 file changed, 2 insertions(+), 18 deletions(-)

diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index c040b17..82ebe5d 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -1971,25 +1971,10 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
             ppc6xx_tlb_invalidate_virt(env, addr, 1);
         }
         break;
-    case POWERPC_MMU_SOFT_4xx:
-    case POWERPC_MMU_SOFT_4xx_Z:
-        ppc4xx_tlb_invalidate_virt(env, addr, env->spr[SPR_40x_PID]);
-        break;
-    case POWERPC_MMU_REAL:
-        cpu_abort(CPU(cpu), "No TLB for PowerPC 4xx in real mode\n");
-        break;
-    case POWERPC_MMU_MPC8xx:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "MPC8xx MMU model is not implemented\n");
-        break;
     case POWERPC_MMU_BOOKE:
         /* XXX: TODO */
         cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
         break;
-    case POWERPC_MMU_BOOKE206:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "BookE 2.06 MMU model is not implemented\n");
-        break;
     case POWERPC_MMU_32B:
     case POWERPC_MMU_601:
         /* tlbie invalidate TLBs for all segments */
@@ -2031,9 +2016,8 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
         break;
 #endif /* defined(TARGET_PPC64) */
     default:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "Unknown MMU model\n");
-        break;
+        /* Should never reach here with other MMU models */
+        assert(0);
     }
 #else
     ppc_tlb_invalidate_all(env);
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (5 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-27 17:58   ` Laurent Vivier
  2016-01-28  4:20   ` Benjamin Herrenschmidt
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
                   ` (3 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Currently both the tlbiva instruction (used on 44x chips) and the tlbie
instruction (used on hash MMU chips) are handled via
ppc_tlb_invalidate_one().  This is silly, because they're invoked from
different places, and do different things.

Clean this up by separating out the tlbiva instruction into its own
handling.  In fact the implementation is only a stub anyway.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/helper.h     |  1 +
 target-ppc/mmu_helper.c | 14 ++++++++++----
 target-ppc/translate.c  |  2 +-
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/target-ppc/helper.h b/target-ppc/helper.h
index 869be15..e5a8f7b 100644
--- a/target-ppc/helper.h
+++ b/target-ppc/helper.h
@@ -544,6 +544,7 @@ DEF_HELPER_2(74xx_tlbd, void, env, tl)
 DEF_HELPER_2(74xx_tlbi, void, env, tl)
 DEF_HELPER_FLAGS_1(tlbia, TCG_CALL_NO_RWG, void, env)
 DEF_HELPER_FLAGS_2(tlbie, TCG_CALL_NO_RWG, void, env, tl)
+DEF_HELPER_FLAGS_2(tlbiva, TCG_CALL_NO_RWG, void, env, tl)
 #if defined(TARGET_PPC64)
 DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
 DEF_HELPER_2(load_slb_esid, tl, env, tl)
diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index 82ebe5d..e9e0edb 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -1971,10 +1971,6 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
             ppc6xx_tlb_invalidate_virt(env, addr, 1);
         }
         break;
-    case POWERPC_MMU_BOOKE:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
-        break;
     case POWERPC_MMU_32B:
     case POWERPC_MMU_601:
         /* tlbie invalidate TLBs for all segments */
@@ -2116,6 +2112,16 @@ void helper_tlbie(CPUPPCState *env, target_ulong addr)
     ppc_tlb_invalidate_one(env, addr);
 }
 
+void helper_tlbiva(CPUPPCState *env, target_ulong addr)
+{
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
+
+    /* tlbiva instruction only exists on BookE */
+    assert(env->mmu_model == POWERPC_MMU_BOOKE);
+    /* XXX: TODO */
+    cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
+}
+
 /* Software driven TLBs management */
 /* PowerPC 602/603 software TLB load instructions helpers */
 static void do_6xx_tlb(CPUPPCState *env, target_ulong new_EPN, int is_code)
diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index 4be7eaa..a05a169 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -5904,7 +5904,7 @@ static void gen_tlbiva(DisasContext *ctx)
     }
     t0 = tcg_temp_new();
     gen_addr_reg_index(ctx, t0);
-    gen_helper_tlbie(cpu_env, cpu_gpr[rB(ctx->opcode)]);
+    gen_helper_tlbiva(cpu_env, cpu_gpr[rB(ctx->opcode)]);
     tcg_temp_free(t0);
 #endif
 }
-- 
2.5.0


* [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (6 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-28  4:33   ` Benjamin Herrenschmidt
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

When HPTEs are removed or modified by hypercalls on spapr, we need to
invalidate the relevant pages in the qemu TLB.

Currently we do that by performing some complicated calculations to work
out the right encoding for the tlbie instruction, then passing that to
ppc_tlb_invalidate_one()... which totally ignores the argument and flushes
the whole TLB.

Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c.
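For illustration, the conservative semantics of the new helper can be modelled in isolation. The sketch below is not the QEMU code (the toy TLB layout and all names are invented): it shows why flush-by-HPTE ends up dropping every cached translation. QEMU's TLB is keyed by effective address, while an HPTE identifies a virtual (VSID-based) address that several SLB entries may map from different effective addresses, so without a flush-by-mask primitive the safe option is a full flush, which is what the patch's tlb_flush(CPU(cpu), 1) call does.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_TLB_SIZE 8

/* Toy software TLB keyed by effective address, standing in for QEMU's */
struct toy_tlb_entry {
    bool valid;
    uint64_t ea;        /* effective address of the cached translation */
};

static struct toy_tlb_entry toy_tlb[TOY_TLB_SIZE];

static void toy_tlb_insert(int slot, uint64_t ea)
{
    toy_tlb[slot].valid = true;
    toy_tlb[slot].ea = ea;
}

/*
 * Analogue of ppc_hash64_tlb_flush_hpte(): the HPTE does not tell us
 * which effective addresses are backed by it (that would require an
 * SLB walk), so the helper conservatively invalidates everything.
 */
static void toy_tlb_flush_hpte(void)
{
    for (int i = 0; i < TOY_TLB_SIZE; i++) {
        toy_tlb[i].valid = false;
    }
}
```

The point of the model is only the contract: after the flush, no stale translation for the modified HPTE can survive, at the cost of also dropping unrelated entries.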

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_hcall.c    | 46 ++++------------------------------------------
 target-ppc/mmu-hash64.c | 12 ++++++++++++
 target-ppc/mmu-hash64.h |  3 +++
 3 files changed, 19 insertions(+), 42 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 4707196..dedc7e0 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -37,42 +37,6 @@ static void set_spr(CPUState *cs, int spr, target_ulong value,
     run_on_cpu(cs, do_spr_sync, &s);
 }
 
-static target_ulong compute_tlbie_rb(target_ulong v, target_ulong r,
-                                     target_ulong pte_index)
-{
-    target_ulong rb, va_low;
-
-    rb = (v & ~0x7fULL) << 16; /* AVA field */
-    va_low = pte_index >> 3;
-    if (v & HPTE64_V_SECONDARY) {
-        va_low = ~va_low;
-    }
-    /* xor vsid from AVA */
-    if (!(v & HPTE64_V_1TB_SEG)) {
-        va_low ^= v >> 12;
-    } else {
-        va_low ^= v >> 24;
-    }
-    va_low &= 0x7ff;
-    if (v & HPTE64_V_LARGE) {
-        rb |= 1;                         /* L field */
-#if 0 /* Disable that P7 specific bit for now */
-        if (r & 0xff000) {
-            /* non-16MB large page, must be 64k */
-            /* (masks depend on page size) */
-            rb |= 0x1000;                /* page encoding in LP field */
-            rb |= (va_low & 0x7f) << 16; /* 7b of VA in AVA/LP field */
-            rb |= (va_low & 0xfe);       /* AVAL field */
-        }
-#endif
-    } else {
-        /* 4kB page */
-        rb |= (va_low & 0x7ff) << 12;   /* remaining 11b of AVA */
-    }
-    rb |= (v >> 54) & 0x300;            /* B field */
-    return rb;
-}
-
 static inline bool valid_pte_index(CPUPPCState *env, target_ulong pte_index)
 {
     /*
@@ -198,7 +162,7 @@ static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
 {
     CPUPPCState *env = &cpu->env;
     uint64_t token;
-    target_ulong v, r, rb;
+    target_ulong v, r;
 
     if (!valid_pte_index(env, ptex)) {
         return REMOVE_PARM;
@@ -217,8 +181,7 @@ static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
     *vp = v;
     *rp = r;
     ppc_hash64_store_hpte(cpu, ptex, HPTE64_V_HPTE_DIRTY, 0);
-    rb = compute_tlbie_rb(v, r, ptex);
-    ppc_tlb_invalidate_one(env, rb);
+    ppc_hash64_tlb_flush_hpte(cpu, ptex, v, r);
     return REMOVE_SUCCESS;
 }
 
@@ -322,7 +285,7 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     target_ulong pte_index = args[1];
     target_ulong avpn = args[2];
     uint64_t token;
-    target_ulong v, r, rb;
+    target_ulong v, r;
 
     if (!valid_pte_index(env, pte_index)) {
         return H_PARAMETER;
@@ -343,10 +306,9 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     r |= (flags << 55) & HPTE64_R_PP0;
     r |= (flags << 48) & HPTE64_R_KEY_HI;
     r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
-    rb = compute_tlbie_rb(v, r, pte_index);
     ppc_hash64_store_hpte(cpu, pte_index,
                           (v & ~HPTE64_V_VALID) | HPTE64_V_HPTE_DIRTY, 0);
-    ppc_tlb_invalidate_one(env, rb);
+    ppc_hash64_tlb_flush_hpte(cpu, pte_index, v, r);
     /* Don't need a memory barrier, due to qemu's global lock */
     ppc_hash64_store_hpte(cpu, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
     return H_SUCCESS;
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index ee1e8bf..3284776 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -707,3 +707,15 @@ void ppc_hash64_store_hpte(PowerPCCPU *cpu,
                  env->htab_base + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
     }
 }
+
+void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
+                               target_ulong pte_index,
+                               target_ulong pte0, target_ulong pte1)
+{
+    /*
+     * XXX: given the fact that there are too many segments to
+     * invalidate, and we still don't have a tlb_flush_mask(env, n,
+     * mask) in QEMU, we just invalidate all TLBs
+     */
+    tlb_flush(CPU(cpu), 1);
+}
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 24fd2c4..293a951 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -13,6 +13,9 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
 void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
                            target_ulong pte0, target_ulong pte1);
+void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
+                               target_ulong pte_index,
+                               target_ulong pte0, target_ulong pte1);
 #endif
 
 /*
-- 
2.5.0


* [Qemu-devel] [PATCHv2 09/10] target-ppc: Helper to determine page size information from hpte alone
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (7 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-28  4:33   ` Benjamin Herrenschmidt
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
  2016-01-28 20:44 ` [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG Alexander Graf
  10 siblings, 1 reply; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

h_enter() in the spapr code needs to know the page size of the HPTE it's
about to insert.  Unlike other paths that do this, it doesn't have access
to the SLB, so at the moment it determines this with some open-coded
tests which assume POWER7 or POWER8 page size encodings.

To make this more flexible, add ppc_hash64_hpte_page_shift_noslb() to
determine both the "base" page size per segment and the individual
effective page size from an HPTE alone.

This means that the spapr code should now be able to handle any page size
listed in the env->sps table.
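The search the new helper performs can be sketched standalone. The model below is illustrative only: it assumes the LP encoding has already been extracted from the HPTE's second word and compares it directly against pte_enc, whereas the real hpte_page_shift() masks the LP bits out of pte1 according to each candidate size. The table mirrors the POWER7/POWER8 one added in patch 10/10 of this series.

```c
#include <assert.h>
#include <stddef.h>

#define SPS_MAX 4   /* stands in for PPC_PAGE_SIZES_MAX_SZ (illustrative) */

struct one_page_size { unsigned page_shift; unsigned pte_enc; };
struct one_seg_page_size {
    unsigned page_shift;                 /* base (segment) page shift */
    struct one_page_size enc[SPS_MAX];   /* allowed actual page sizes */
};

/* Mirrors the POWER7/POWER8 sps table added later in the series */
static const struct one_seg_page_size sps[SPS_MAX] = {
    { 12, { { 12, 0x0 }, { 16, 0x7 }, { 24, 0x38 } } },
    { 16, { { 16, 0x1 }, { 24, 0x8 } } },
    { 24, { { 24, 0x0 } } },
    { 34, { { 34, 0x3 } } },
};

/*
 * Simplified analogue of ppc_hash64_hpte_page_shift_noslb(): given
 * whether the HPTE has the LARGE bit set and its (already extracted)
 * LP encoding, find the unique (segment shift, actual shift) pair.
 * Returns the actual page shift, or 0 for a bad encoding.  The
 * encodings must be chosen so this search is unambiguous.
 */
static unsigned page_shift_noslb(int large, unsigned lp_enc,
                                 unsigned *seg_page_shift)
{
    if (!large) {
        *seg_page_shift = 12;
        return 12;      /* plain 4K page in a 4K segment */
    }
    for (size_t i = 0; i < SPS_MAX; i++) {
        if (!sps[i].page_shift) {
            break;
        }
        for (size_t j = 0; j < SPS_MAX; j++) {
            const struct one_page_size *ps = &sps[i].enc[j];
            if (!ps->page_shift) {
                break;
            }
            /* LARGE bit set, so a 4K encoding cannot match */
            if (ps->page_shift != 12 && ps->pte_enc == lp_enc) {
                *seg_page_shift = sps[i].page_shift;
                return ps->page_shift;
            }
        }
    }
    *seg_page_shift = 0;
    return 0;
}
```

Note how MPSS falls out of the table: a 16M page in a 4K base segment (enc 0x38) and a 16M page in a 16M segment (enc 0x0) resolve to different segment shifts from the HPTE alone.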

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_hcall.c    | 25 ++++++-------------------
 target-ppc/mmu-hash64.c | 35 +++++++++++++++++++++++++++++++++++
 target-ppc/mmu-hash64.h |  3 +++
 3 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index dedc7e0..a535c73 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -72,31 +72,18 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     target_ulong pte_index = args[1];
     target_ulong pteh = args[2];
     target_ulong ptel = args[3];
-    target_ulong page_shift = 12;
+    unsigned apshift, spshift;
     target_ulong raddr;
     target_ulong index;
     uint64_t token;
 
-    /* only handle 4k and 16M pages for now */
-    if (pteh & HPTE64_V_LARGE) {
-#if 0 /* We don't support 64k pages yet */
-        if ((ptel & 0xf000) == 0x1000) {
-            /* 64k page */
-        } else
-#endif
-        if ((ptel & 0xff000) == 0) {
-            /* 16M page */
-            page_shift = 24;
-            /* lowest AVA bit must be 0 for 16M pages */
-            if (pteh & 0x80) {
-                return H_PARAMETER;
-            }
-        } else {
-            return H_PARAMETER;
-        }
+    apshift = ppc_hash64_hpte_page_shift_noslb(cpu, pteh, ptel, &spshift);
+    if (!apshift) {
+        /* Bad page size encoding */
+        return H_PARAMETER;
     }
 
-    raddr = (ptel & HPTE64_R_RPN) & ~((1ULL << page_shift) - 1);
+    raddr = (ptel & HPTE64_R_RPN) & ~((1ULL << apshift) - 1);
 
     if (is_ram_address(spapr, raddr)) {
         /* Regular RAM - should have WIMG=0010 */
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 3284776..19ee942 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -512,6 +512,41 @@ static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
     return 0; /* Bad page size encoding */
 }
 
+unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
+                                          uint64_t pte0, uint64_t pte1,
+                                          unsigned *seg_page_shift)
+{
+    CPUPPCState *env = &cpu->env;
+    int i;
+
+    if (!(pte0 & HPTE64_V_LARGE)) {
+        *seg_page_shift = 12;
+        return 12;
+    }
+
+    /*
+     * The encodings in env->sps need to be carefully chosen so that
+     * this gives an unambiguous result.
+     */
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_seg_page_size *sps = &env->sps.sps[i];
+        unsigned shift;
+
+        if (!sps->page_shift) {
+            break;
+        }
+
+        shift = hpte_page_shift(sps, pte0, pte1);
+        if (shift) {
+            *seg_page_shift = sps->page_shift;
+            return shift;
+        }
+    }
+
+    *seg_page_shift = 0;
+    return 0;
+}
+
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
                                 int rwx, int mmu_idx)
 {
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 293a951..34cf975 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -16,6 +16,9 @@ void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
 void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
                                target_ulong pte_index,
                                target_ulong pte0, target_ulong pte1);
+unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
+                                          uint64_t pte0, uint64_t pte1,
+                                          unsigned *seg_page_shift);
 #endif
 
 /*
-- 
2.5.0


* [Qemu-devel] [PATCHv2 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (8 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
@ 2016-01-27 10:13 ` David Gibson
  2016-01-28  4:36   ` Benjamin Herrenschmidt
  2016-01-28 20:44 ` [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG Alexander Graf
  10 siblings, 1 reply; 33+ messages in thread
From: David Gibson @ 2016-01-27 10:13 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Now that the TCG and spapr code has been extended to allow (semi-)
arbitrary page encodings in the CPU's 'sps' table, we can add the many
page sizes supported by real POWER7 and POWER8 hardware that we previously
didn't support in TCG.
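On the segment-size side, the new SLB_VSID_* constants must stay distinguishable under the L/LP mask, since the sps table's slb_enc field is matched against those bits of the SLB entry. A minimal, self-contained check (constant values copied from the patch; the SLB_VSID_L and SLB_VSID_LP mask values are assumptions consistent with those constants, and slb_base_page_shift() is a made-up name for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Values as shown in the patch's mmu-hash64.h hunk */
#define SLB_VSID_L        0x0000000000000100ULL  /* assumed mask layout */
#define SLB_VSID_LP       0x0000000000000030ULL  /* assumed mask layout */
#define SLB_VSID_LLP_MASK (SLB_VSID_L | SLB_VSID_LP)
#define SLB_VSID_4K       0x0000000000000000ULL
#define SLB_VSID_64K      0x0000000000000110ULL
#define SLB_VSID_16M      0x0000000000000100ULL
#define SLB_VSID_16G      0x0000000000000120ULL

/*
 * Map an SLB VSID word to the base page shift of its segment,
 * mirroring how slb_enc values in the sps table are compared
 * under SLB_VSID_LLP_MASK.
 */
static unsigned slb_base_page_shift(uint64_t vsid)
{
    switch (vsid & SLB_VSID_LLP_MASK) {
    case SLB_VSID_4K:   return 12;
    case SLB_VSID_64K:  return 16;
    case SLB_VSID_16M:  return 24;
    case SLB_VSID_16G:  return 34;
    default:            return 0;   /* reserved encoding */
    }
}
```

Since each of the four constants yields a distinct value under the mask, an SLB entry unambiguously selects one row of the POWER7_POWER8_sps table.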

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/mmu-hash64.h     |  2 ++
 target-ppc/translate_init.c | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 34cf975..ab0f86b 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -48,6 +48,8 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
 #define SLB_VSID_LLP_MASK       (SLB_VSID_L | SLB_VSID_LP)
 #define SLB_VSID_4K             0x0000000000000000ULL
 #define SLB_VSID_64K            0x0000000000000110ULL
+#define SLB_VSID_16M            0x0000000000000100ULL
+#define SLB_VSID_16G            0x0000000000000120ULL
 
 /*
  * Hash page table definitions
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index f6babd2..32b3679 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -8104,6 +8104,36 @@ static Property powerpc_servercpu_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
+#ifdef CONFIG_SOFTMMU
+static const struct ppc_segment_page_sizes POWER7_POWER8_sps = {
+    .sps = {
+        {
+            .page_shift = 12, /* 4K */
+            .slb_enc = 0,
+            .enc = { { .page_shift = 12, .pte_enc = 0 },
+                     { .page_shift = 16, .pte_enc = 0x7 },
+                     { .page_shift = 24, .pte_enc = 0x38 }, },
+        },
+        {
+            .page_shift = 16, /* 64K */
+            .slb_enc = SLB_VSID_64K,
+            .enc = { { .page_shift = 16, .pte_enc = 0x1 },
+                     { .page_shift = 24, .pte_enc = 0x8 }, },
+        },
+        {
+            .page_shift = 24, /* 16M */
+            .slb_enc = SLB_VSID_16M,
+            .enc = { { .page_shift = 24, .pte_enc = 0 }, },
+        },
+        {
+            .page_shift = 34, /* 16G */
+            .slb_enc = SLB_VSID_16G,
+            .enc = { { .page_shift = 34, .pte_enc = 0x3 }, },
+        },
+    }
+};
+#endif /* CONFIG_SOFTMMU */
+
 static void init_proc_POWER7 (CPUPPCState *env)
 {
     init_proc_book3s_64(env, BOOK3S_CPU_POWER7);
@@ -8167,6 +8197,7 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
     pcc->mmu_model = POWERPC_MMU_2_06;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
+    pcc->sps = &POWER7_POWER8_sps;
 #endif
     pcc->excp_model = POWERPC_EXCP_POWER7;
     pcc->bus_model = PPC_FLAGS_INPUT_POWER7;
@@ -8247,6 +8278,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
     pcc->mmu_model = POWERPC_MMU_2_07;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
+    pcc->sps = &POWER7_POWER8_sps;
 #endif
     pcc->excp_model = POWERPC_EXCP_POWER7;
     pcc->bus_model = PPC_FLAGS_INPUT_POWER7;
-- 
2.5.0


* Re: [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
@ 2016-01-27 12:16   ` Thomas Huth
  2016-01-27 12:55   ` Laurent Vivier
  1 sibling, 0 replies; 33+ messages in thread
From: Thomas Huth @ 2016-01-27 12:16 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, qemu-ppc, agraf, qemu-devel

On 27.01.2016 11:13, David Gibson wrote:
> This stub function is in the !KVM ifdef in target-ppc/kvm_ppc.h.  However
> no such function exists on the KVM side, or is ever used.
> 
> I think this originally referenced a function which read host page size
> information from /proc, for which we now use the KVM GET_SMMU_INFO extension
> instead.
> 
> In any case, it has no function now, so remove it.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/kvm_ppc.h | 5 -----
>  1 file changed, 5 deletions(-)
> 
> diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
> index 5e1333d..62406ce 100644
> --- a/target-ppc/kvm_ppc.h
> +++ b/target-ppc/kvm_ppc.h
> @@ -98,11 +98,6 @@ static inline int kvmppc_get_hypercall(CPUPPCState *env, uint8_t *buf, int buf_l
>      return -1;
>  }
>  
> -static inline int kvmppc_read_segment_page_sizes(uint32_t *prop, int maxcells)
> -{
> -    return -1;
> -}
> -
>  static inline int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level)
>  {
>      return -1;

Reviewed-by: Thomas Huth <thuth@redhat.com>


* Re: [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
  2016-01-27 12:16   ` Thomas Huth
@ 2016-01-27 12:55   ` Laurent Vivier
  1 sibling, 0 replies; 33+ messages in thread
From: Laurent Vivier @ 2016-01-27 12:55 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, thuth, qemu-ppc, agraf, qemu-devel



On 27/01/2016 11:13, David Gibson wrote:
> This stub function is in the !KVM ifdef in target-ppc/kvm_ppc.h.  However
> no such function exists on the KVM side, or is ever used.
> 
> I think this originally referenced a function which read host page size
> information from /proc, for which we now use the KVM GET_SMMU_INFO extension
> instead.
> 
> In any case, it has no function now, so remove it.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/kvm_ppc.h | 5 -----
>  1 file changed, 5 deletions(-)
> 
> diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
> index 5e1333d..62406ce 100644
> --- a/target-ppc/kvm_ppc.h
> +++ b/target-ppc/kvm_ppc.h
> @@ -98,11 +98,6 @@ static inline int kvmppc_get_hypercall(CPUPPCState *env, uint8_t *buf, int buf_l
>      return -1;
>  }
>  
> -static inline int kvmppc_read_segment_page_sizes(uint32_t *prop, int maxcells)
> -{
> -    return -1;
> -}
> -
>  static inline int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level)
>  {
>      return -1;
> 

Reviewed-by: Laurent Vivier <lvivier@redhat.com>

This function was added by:

4656e1f ppc64: Rudimentary Support for extra page sizes on server CPUs

but was never implemented.


* Re: [Qemu-devel] [PATCHv2 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
@ 2016-01-27 14:11   ` Laurent Vivier
  0 siblings, 0 replies; 33+ messages in thread
From: Laurent Vivier @ 2016-01-27 14:11 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, thuth, qemu-ppc, agraf, qemu-devel



On 27/01/2016 11:13, David Gibson wrote:
> Like a lot of places these files include a mixture of functions taking
> both the older CPUPPCState *env and newer PowerPCCPU *cpu.  Move a step
> closer to cleaning this up by standardizing on PowerPCCPU, except for the
> helper_* functions which are called with the CPUPPCState * from tcg.
> 
> Callers and some related functions are updated as well, the boundaries of
> what's changed here are a bit arbitrary.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Reviewed-by: Laurent Vivier <lvivier@redhat.com>

> ---
>  hw/ppc/spapr_hcall.c    | 31 ++++++++++---------
>  target-ppc/kvm.c        |  2 +-
>  target-ppc/mmu-hash32.c | 68 +++++++++++++++++++++++------------------
>  target-ppc/mmu-hash32.h | 30 ++++++++++---------
>  target-ppc/mmu-hash64.c | 80 +++++++++++++++++++++++++++++--------------------
>  target-ppc/mmu-hash64.h | 21 ++++++-------
>  target-ppc/mmu_helper.c | 13 ++++----
>  7 files changed, 136 insertions(+), 109 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index c4ae255..4707196 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -160,7 +160,7 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>          pte_index &= ~7ULL;
>          token = ppc_hash64_start_access(cpu, pte_index);
>          for (; index < 8; index++) {
> -            if ((ppc_hash64_load_hpte0(env, token, index) & HPTE64_V_VALID) == 0) {
> +            if (!(ppc_hash64_load_hpte0(cpu, token, index) & HPTE64_V_VALID)) {
>                  break;
>              }
>          }
> @@ -170,14 +170,14 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>          }
>      } else {
>          token = ppc_hash64_start_access(cpu, pte_index);
> -        if (ppc_hash64_load_hpte0(env, token, 0) & HPTE64_V_VALID) {
> +        if (ppc_hash64_load_hpte0(cpu, token, 0) & HPTE64_V_VALID) {
>              ppc_hash64_stop_access(token);
>              return H_PTEG_FULL;
>          }
>          ppc_hash64_stop_access(token);
>      }
>  
> -    ppc_hash64_store_hpte(env, pte_index + index,
> +    ppc_hash64_store_hpte(cpu, pte_index + index,
>                            pteh | HPTE64_V_HPTE_DIRTY, ptel);
>  
>      args[0] = pte_index + index;
> @@ -191,11 +191,12 @@ typedef enum {
>      REMOVE_HW = 3,
>  } RemoveResult;
>  
> -static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
> +static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
>                                  target_ulong avpn,
>                                  target_ulong flags,
>                                  target_ulong *vp, target_ulong *rp)
>  {
> +    CPUPPCState *env = &cpu->env;
>      uint64_t token;
>      target_ulong v, r, rb;
>  
> @@ -203,9 +204,9 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
>          return REMOVE_PARM;
>      }
>  
> -    token = ppc_hash64_start_access(ppc_env_get_cpu(env), ptex);
> -    v = ppc_hash64_load_hpte0(env, token, 0);
> -    r = ppc_hash64_load_hpte1(env, token, 0);
> +    token = ppc_hash64_start_access(cpu, ptex);
> +    v = ppc_hash64_load_hpte0(cpu, token, 0);
> +    r = ppc_hash64_load_hpte1(cpu, token, 0);
>      ppc_hash64_stop_access(token);
>  
>      if ((v & HPTE64_V_VALID) == 0 ||
> @@ -215,7 +216,7 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
>      }
>      *vp = v;
>      *rp = r;
> -    ppc_hash64_store_hpte(env, ptex, HPTE64_V_HPTE_DIRTY, 0);
> +    ppc_hash64_store_hpte(cpu, ptex, HPTE64_V_HPTE_DIRTY, 0);
>      rb = compute_tlbie_rb(v, r, ptex);
>      ppc_tlb_invalidate_one(env, rb);
>      return REMOVE_SUCCESS;
> @@ -224,13 +225,12 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
>  static target_ulong h_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>                               target_ulong opcode, target_ulong *args)
>  {
> -    CPUPPCState *env = &cpu->env;
>      target_ulong flags = args[0];
>      target_ulong pte_index = args[1];
>      target_ulong avpn = args[2];
>      RemoveResult ret;
>  
> -    ret = remove_hpte(env, pte_index, avpn, flags,
> +    ret = remove_hpte(cpu, pte_index, avpn, flags,
>                        &args[0], &args[1]);
>  
>      switch (ret) {
> @@ -271,7 +271,6 @@ static target_ulong h_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>  static target_ulong h_bulk_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>                                    target_ulong opcode, target_ulong *args)
>  {
> -    CPUPPCState *env = &cpu->env;
>      int i;
>  
>      for (i = 0; i < H_BULK_REMOVE_MAX_BATCH; i++) {
> @@ -293,7 +292,7 @@ static target_ulong h_bulk_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>              return H_PARAMETER;
>          }
>  
> -        ret = remove_hpte(env, *tsh & H_BULK_REMOVE_PTEX, tsl,
> +        ret = remove_hpte(cpu, *tsh & H_BULK_REMOVE_PTEX, tsl,
>                            (*tsh & H_BULK_REMOVE_FLAGS) >> 26,
>                            &v, &r);
>  
> @@ -330,8 +329,8 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>      }
>  
>      token = ppc_hash64_start_access(cpu, pte_index);
> -    v = ppc_hash64_load_hpte0(env, token, 0);
> -    r = ppc_hash64_load_hpte1(env, token, 0);
> +    v = ppc_hash64_load_hpte0(cpu, token, 0);
> +    r = ppc_hash64_load_hpte1(cpu, token, 0);
>      ppc_hash64_stop_access(token);
>  
>      if ((v & HPTE64_V_VALID) == 0 ||
> @@ -345,11 +344,11 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>      r |= (flags << 48) & HPTE64_R_KEY_HI;
>      r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
>      rb = compute_tlbie_rb(v, r, pte_index);
> -    ppc_hash64_store_hpte(env, pte_index,
> +    ppc_hash64_store_hpte(cpu, pte_index,
>                            (v & ~HPTE64_V_VALID) | HPTE64_V_HPTE_DIRTY, 0);
>      ppc_tlb_invalidate_one(env, rb);
>      /* Don't need a memory barrier, due to qemu's global lock */
> -    ppc_hash64_store_hpte(env, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
> +    ppc_hash64_store_hpte(cpu, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
>      return H_SUCCESS;
>  }
>  
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 4524999..98d7ba6 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
>               * Only restore valid entries
>               */
>              if (rb & SLB_ESID_V) {
> -                ppc_store_slb(env, rb, rs);
> +                ppc_store_slb(cpu, rb, rs);
>              }
>          }
>  #endif
> diff --git a/target-ppc/mmu-hash32.c b/target-ppc/mmu-hash32.c
> index a00ae3c..4497480 100644
> --- a/target-ppc/mmu-hash32.c
> +++ b/target-ppc/mmu-hash32.c
> @@ -83,9 +83,10 @@ static int ppc_hash32_pp_prot(int key, int pp, int nx)
>      return prot;
>  }
>  
> -static int ppc_hash32_pte_prot(CPUPPCState *env,
> +static int ppc_hash32_pte_prot(PowerPCCPU *cpu,
>                                 target_ulong sr, ppc_hash_pte32_t pte)
>  {
> +    CPUPPCState *env = &cpu->env;
>      unsigned pp, key;
>  
>      key = !!(msr_pr ? (sr & SR32_KP) : (sr & SR32_KS));
> @@ -94,9 +95,11 @@ static int ppc_hash32_pte_prot(CPUPPCState *env,
>      return ppc_hash32_pp_prot(key, pp, !!(sr & SR32_NX));
>  }
>  
> -static target_ulong hash32_bat_size(CPUPPCState *env,
> +static target_ulong hash32_bat_size(PowerPCCPU *cpu,
>                                      target_ulong batu, target_ulong batl)
>  {
> +    CPUPPCState *env = &cpu->env;
> +
>      if ((msr_pr && !(batu & BATU32_VP))
>          || (!msr_pr && !(batu & BATU32_VS))) {
>          return 0;
> @@ -105,7 +108,7 @@ static target_ulong hash32_bat_size(CPUPPCState *env,
>      return BATU32_BEPI & ~((batu & BATU32_BL) << 15);
>  }
>  
> -static int hash32_bat_prot(CPUPPCState *env,
> +static int hash32_bat_prot(PowerPCCPU *cpu,
>                             target_ulong batu, target_ulong batl)
>  {
>      int pp, prot;
> @@ -121,7 +124,7 @@ static int hash32_bat_prot(CPUPPCState *env,
>      return prot;
>  }
>  
> -static target_ulong hash32_bat_601_size(CPUPPCState *env,
> +static target_ulong hash32_bat_601_size(PowerPCCPU *cpu,
>                                  target_ulong batu, target_ulong batl)
>  {
>      if (!(batl & BATL32_601_V)) {
> @@ -131,9 +134,10 @@ static target_ulong hash32_bat_601_size(CPUPPCState *env,
>      return BATU32_BEPI & ~((batl & BATL32_601_BL) << 17);
>  }
>  
> -static int hash32_bat_601_prot(CPUPPCState *env,
> +static int hash32_bat_601_prot(PowerPCCPU *cpu,
>                                 target_ulong batu, target_ulong batl)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int key, pp;
>  
>      pp = batu & BATU32_601_PP;
> @@ -145,9 +149,10 @@ static int hash32_bat_601_prot(CPUPPCState *env,
>      return ppc_hash32_pp_prot(key, pp, 0);
>  }
>  
> -static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
> +static hwaddr ppc_hash32_bat_lookup(PowerPCCPU *cpu, target_ulong ea, int rwx,
>                                      int *prot)
>  {
> +    CPUPPCState *env = &cpu->env;
>      target_ulong *BATlt, *BATut;
>      int i;
>  
> @@ -166,9 +171,9 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
>          target_ulong mask;
>  
>          if (unlikely(env->mmu_model == POWERPC_MMU_601)) {
> -            mask = hash32_bat_601_size(env, batu, batl);
> +            mask = hash32_bat_601_size(cpu, batu, batl);
>          } else {
> -            mask = hash32_bat_size(env, batu, batl);
> +            mask = hash32_bat_size(cpu, batu, batl);
>          }
>          LOG_BATS("%s: %cBAT%d v " TARGET_FMT_lx " BATu " TARGET_FMT_lx
>                   " BATl " TARGET_FMT_lx "\n", __func__,
> @@ -178,9 +183,9 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
>              hwaddr raddr = (batl & mask) | (ea & ~mask);
>  
>              if (unlikely(env->mmu_model == POWERPC_MMU_601)) {
> -                *prot = hash32_bat_601_prot(env, batu, batl);
> +                *prot = hash32_bat_601_prot(cpu, batu, batl);
>              } else {
> -                *prot = hash32_bat_prot(env, batu, batl);
> +                *prot = hash32_bat_prot(cpu, batu, batl);
>              }
>  
>              return raddr & TARGET_PAGE_MASK;
> @@ -209,11 +214,12 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
>      return -1;
>  }
>  
> -static int ppc_hash32_direct_store(CPUPPCState *env, target_ulong sr,
> +static int ppc_hash32_direct_store(PowerPCCPU *cpu, target_ulong sr,
>                                     target_ulong eaddr, int rwx,
>                                     hwaddr *raddr, int *prot)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUState *cs = CPU(cpu);
> +    CPUPPCState *env = &cpu->env;
>      int key = !!(msr_pr ? (sr & SR32_KP) : (sr & SR32_KS));
>  
>      qemu_log_mask(CPU_LOG_MMU, "direct store...\n");
> @@ -293,12 +299,14 @@ static int ppc_hash32_direct_store(CPUPPCState *env, target_ulong sr,
>      }
>  }
>  
> -hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash)
> +hwaddr get_pteg_offset32(PowerPCCPU *cpu, hwaddr hash)
>  {
> +    CPUPPCState *env = &cpu->env;
> +
>      return (hash * HASH_PTEG_SIZE_32) & env->htab_mask;
>  }
>  
> -static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
> +static hwaddr ppc_hash32_pteg_search(PowerPCCPU *cpu, hwaddr pteg_off,
>                                       bool secondary, target_ulong ptem,
>                                       ppc_hash_pte32_t *pte)
>  {
> @@ -307,8 +315,8 @@ static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
>      int i;
>  
>      for (i = 0; i < HPTES_PER_GROUP; i++) {
> -        pte0 = ppc_hash32_load_hpte0(env, pte_offset);
> -        pte1 = ppc_hash32_load_hpte1(env, pte_offset);
> +        pte0 = ppc_hash32_load_hpte0(cpu, pte_offset);
> +        pte1 = ppc_hash32_load_hpte1(cpu, pte_offset);
>  
>          if ((pte0 & HPTE32_V_VALID)
>              && (secondary == !!(pte0 & HPTE32_V_SECONDARY))
> @@ -324,10 +332,11 @@ static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
>      return -1;
>  }
>  
> -static hwaddr ppc_hash32_htab_lookup(CPUPPCState *env,
> +static hwaddr ppc_hash32_htab_lookup(PowerPCCPU *cpu,
>                                       target_ulong sr, target_ulong eaddr,
>                                       ppc_hash_pte32_t *pte)
>  {
> +    CPUPPCState *env = &cpu->env;
>      hwaddr pteg_off, pte_offset;
>      hwaddr hash;
>      uint32_t vsid, pgidx, ptem;
> @@ -348,16 +357,16 @@ static hwaddr ppc_hash32_htab_lookup(CPUPPCState *env,
>              " vsid=%" PRIx32 " ptem=%" PRIx32
>              " hash=" TARGET_FMT_plx "\n",
>              env->htab_base, env->htab_mask, vsid, ptem, hash);
> -    pteg_off = get_pteg_offset32(env, hash);
> -    pte_offset = ppc_hash32_pteg_search(env, pteg_off, 0, ptem, pte);
> +    pteg_off = get_pteg_offset32(cpu, hash);
> +    pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 0, ptem, pte);
>      if (pte_offset == -1) {
>          /* Secondary PTEG lookup */
>          qemu_log_mask(CPU_LOG_MMU, "1 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
>                  " vsid=%" PRIx32 " api=%" PRIx32
>                  " hash=" TARGET_FMT_plx "\n", env->htab_base,
>                  env->htab_mask, vsid, ptem, ~hash);
> -        pteg_off = get_pteg_offset32(env, ~hash);
> -        pte_offset = ppc_hash32_pteg_search(env, pteg_off, 1, ptem, pte);
> +        pteg_off = get_pteg_offset32(cpu, ~hash);
> +        pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 1, ptem, pte);
>      }
>  
>      return pte_offset;
> @@ -399,7 +408,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
>  
>      /* 2. Check Block Address Translation entries (BATs) */
>      if (env->nb_BATs != 0) {
> -        raddr = ppc_hash32_bat_lookup(env, eaddr, rwx, &prot);
> +        raddr = ppc_hash32_bat_lookup(cpu, eaddr, rwx, &prot);
>          if (raddr != -1) {
>              if (need_prot[rwx] & ~prot) {
>                  if (rwx == 2) {
> @@ -430,7 +439,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
>  
>      /* 4. Handle direct store segments */
>      if (sr & SR32_T) {
> -        if (ppc_hash32_direct_store(env, sr, eaddr, rwx,
> +        if (ppc_hash32_direct_store(cpu, sr, eaddr, rwx,
>                                      &raddr, &prot) == 0) {
>              tlb_set_page(cs, eaddr & TARGET_PAGE_MASK,
>                           raddr & TARGET_PAGE_MASK, prot, mmu_idx,
> @@ -449,7 +458,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
>      }
>  
>      /* 6. Locate the PTE in the hash table */
> -    pte_offset = ppc_hash32_htab_lookup(env, sr, eaddr, &pte);
> +    pte_offset = ppc_hash32_htab_lookup(cpu, sr, eaddr, &pte);
>      if (pte_offset == -1) {
>          if (rwx == 2) {
>              cs->exception_index = POWERPC_EXCP_ISI;
> @@ -472,7 +481,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
>  
>      /* 7. Check access permissions */
>  
> -    prot = ppc_hash32_pte_prot(env, sr, pte);
> +    prot = ppc_hash32_pte_prot(cpu, sr, pte);
>  
>      if (need_prot[rwx] & ~prot) {
>          /* Access right violation */
> @@ -507,7 +516,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
>      }
>  
>      if (new_pte1 != pte.pte1) {
> -        ppc_hash32_store_hpte1(env, pte_offset, new_pte1);
> +        ppc_hash32_store_hpte1(cpu, pte_offset, new_pte1);
>      }
>  
>      /* 9. Determine the real address from the PTE */
> @@ -520,8 +529,9 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
>      return 0;
>  }
>  
> -hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
> +hwaddr ppc_hash32_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>  {
> +    CPUPPCState *env = &cpu->env;
>      target_ulong sr;
>      hwaddr pte_offset;
>      ppc_hash_pte32_t pte;
> @@ -533,7 +543,7 @@ hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
>      }
>  
>      if (env->nb_BATs != 0) {
> -        hwaddr raddr = ppc_hash32_bat_lookup(env, eaddr, 0, &prot);
> +        hwaddr raddr = ppc_hash32_bat_lookup(cpu, eaddr, 0, &prot);
>          if (raddr != -1) {
>              return raddr;
>          }
> @@ -546,7 +556,7 @@ hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
>          return -1;
>      }
>  
> -    pte_offset = ppc_hash32_htab_lookup(env, sr, eaddr, &pte);
> +    pte_offset = ppc_hash32_htab_lookup(cpu, sr, eaddr, &pte);
>      if (pte_offset == -1) {
>          return -1;
>      }
> diff --git a/target-ppc/mmu-hash32.h b/target-ppc/mmu-hash32.h
> index d515d4f..afbb9dd 100644
> --- a/target-ppc/mmu-hash32.h
> +++ b/target-ppc/mmu-hash32.h
> @@ -3,8 +3,8 @@
>  
>  #ifndef CONFIG_USER_ONLY
>  
> -hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash);
> -hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong addr);
> +hwaddr get_pteg_offset32(PowerPCCPU *cpu, hwaddr hash);
> +hwaddr ppc_hash32_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
>  int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
>                                  int mmu_idx);
>  
> @@ -65,40 +65,42 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
>  #define HPTE32_R_WIMG           0x00000078
>  #define HPTE32_R_PP             0x00000003
>  
> -static inline target_ulong ppc_hash32_load_hpte0(CPUPPCState *env,
> +static inline target_ulong ppc_hash32_load_hpte0(PowerPCCPU *cpu,
>                                                   hwaddr pte_offset)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>  
>      assert(!env->external_htab); /* Not supported on 32-bit for now */
> -    return ldl_phys(cs->as, env->htab_base + pte_offset);
> +    return ldl_phys(CPU(cpu)->as, env->htab_base + pte_offset);
>  }
>  
> -static inline target_ulong ppc_hash32_load_hpte1(CPUPPCState *env,
> +static inline target_ulong ppc_hash32_load_hpte1(PowerPCCPU *cpu,
>                                                   hwaddr pte_offset)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>  
>      assert(!env->external_htab); /* Not supported on 32-bit for now */
> -    return ldl_phys(cs->as, env->htab_base + pte_offset + HASH_PTE_SIZE_32/2);
> +    return ldl_phys(CPU(cpu)->as,
> +                    env->htab_base + pte_offset + HASH_PTE_SIZE_32 / 2);
>  }
>  
> -static inline void ppc_hash32_store_hpte0(CPUPPCState *env,
> +static inline void ppc_hash32_store_hpte0(PowerPCCPU *cpu,
>                                            hwaddr pte_offset, target_ulong pte0)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>  
>      assert(!env->external_htab); /* Not supported on 32-bit for now */
> -    stl_phys(cs->as, env->htab_base + pte_offset, pte0);
> +    stl_phys(CPU(cpu)->as, env->htab_base + pte_offset, pte0);
>  }
>  
> -static inline void ppc_hash32_store_hpte1(CPUPPCState *env,
> +static inline void ppc_hash32_store_hpte1(PowerPCCPU *cpu,
>                                            hwaddr pte_offset, target_ulong pte1)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>  
>      assert(!env->external_htab); /* Not supported on 32-bit for now */
> -    stl_phys(cs->as, env->htab_base + pte_offset + HASH_PTE_SIZE_32/2, pte1);
> +    stl_phys(CPU(cpu)->as,
> +             env->htab_base + pte_offset + HASH_PTE_SIZE_32 / 2, pte1);
>  }
>  
>  typedef struct {
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 34e20fa..03e25fd 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -40,8 +40,9 @@ bool kvmppc_kern_htab;
>   * SLB handling
>   */
>  
> -static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
> +static ppc_slb_t *slb_lookup(PowerPCCPU *cpu, target_ulong eaddr)
>  {
> +    CPUPPCState *env = &cpu->env;
>      uint64_t esid_256M, esid_1T;
>      int n;
>  
> @@ -69,12 +70,13 @@ static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
>      return NULL;
>  }
>  
> -void dump_slb(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env)
> +void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int i;
>      uint64_t slbe, slbv;
>  
> -    cpu_synchronize_state(CPU(ppc_env_get_cpu(env)));
> +    cpu_synchronize_state(CPU(cpu));
>  
>      cpu_fprintf(f, "SLB\tESID\t\t\tVSID\n");
>      for (i = 0; i < env->slb_nr; i++) {
> @@ -117,7 +119,7 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
>      PowerPCCPU *cpu = ppc_env_get_cpu(env);
>      ppc_slb_t *slb;
>  
> -    slb = slb_lookup(env, addr);
> +    slb = slb_lookup(cpu, addr);
>      if (!slb) {
>          return;
>      }
> @@ -133,8 +135,9 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
>      }
>  }
>  
> -int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int slot = rb & 0xfff;
>      ppc_slb_t *slb = &env->slb[slot];
>  
> @@ -159,9 +162,10 @@ int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
>      return 0;
>  }
>  
> -static int ppc_load_slb_esid(CPUPPCState *env, target_ulong rb,
> +static int ppc_load_slb_esid(PowerPCCPU *cpu, target_ulong rb,
>                               target_ulong *rt)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int slot = rb & 0xfff;
>      ppc_slb_t *slb = &env->slb[slot];
>  
> @@ -173,9 +177,10 @@ static int ppc_load_slb_esid(CPUPPCState *env, target_ulong rb,
>      return 0;
>  }
>  
> -static int ppc_load_slb_vsid(CPUPPCState *env, target_ulong rb,
> +static int ppc_load_slb_vsid(PowerPCCPU *cpu, target_ulong rb,
>                               target_ulong *rt)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int slot = rb & 0xfff;
>      ppc_slb_t *slb = &env->slb[slot];
>  
> @@ -189,7 +194,9 @@ static int ppc_load_slb_vsid(CPUPPCState *env, target_ulong rb,
>  
>  void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
>  {
> -    if (ppc_store_slb(env, rb, rs) < 0) {
> +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
> +
> +    if (ppc_store_slb(cpu, rb, rs) < 0) {
>          helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
>                                     POWERPC_EXCP_INVAL);
>      }
> @@ -197,9 +204,10 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
>  
>  target_ulong helper_load_slb_esid(CPUPPCState *env, target_ulong rb)
>  {
> +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
>      target_ulong rt = 0;
>  
> -    if (ppc_load_slb_esid(env, rb, &rt) < 0) {
> +    if (ppc_load_slb_esid(cpu, rb, &rt) < 0) {
>          helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
>                                     POWERPC_EXCP_INVAL);
>      }
> @@ -208,9 +216,10 @@ target_ulong helper_load_slb_esid(CPUPPCState *env, target_ulong rb)
>  
>  target_ulong helper_load_slb_vsid(CPUPPCState *env, target_ulong rb)
>  {
> +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
>      target_ulong rt = 0;
>  
> -    if (ppc_load_slb_vsid(env, rb, &rt) < 0) {
> +    if (ppc_load_slb_vsid(cpu, rb, &rt) < 0) {
>          helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
>                                     POWERPC_EXCP_INVAL);
>      }
> @@ -221,9 +230,10 @@ target_ulong helper_load_slb_vsid(CPUPPCState *env, target_ulong rb)
>   * 64-bit hash table MMU handling
>   */
>  
> -static int ppc_hash64_pte_prot(CPUPPCState *env,
> +static int ppc_hash64_pte_prot(PowerPCCPU *cpu,
>                                 ppc_slb_t *slb, ppc_hash_pte64_t pte)
>  {
> +    CPUPPCState *env = &cpu->env;
>      unsigned pp, key;
>      /* Some pp bit combinations have undefined behaviour, so default
>       * to no access in those cases */
> @@ -273,12 +283,12 @@ static int ppc_hash64_pte_prot(CPUPPCState *env,
>      return prot;
>  }
>  
> -static int ppc_hash64_amr_prot(CPUPPCState *env, ppc_hash_pte64_t pte)
> +static int ppc_hash64_amr_prot(PowerPCCPU *cpu, ppc_hash_pte64_t pte)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int key, amrbits;
>      int prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
>  
> -
>      /* Only recent MMUs implement Virtual Page Class Key Protection */
>      if (!(env->mmu_model & POWERPC_MMU_AMR)) {
>          return prot;
> @@ -347,23 +357,24 @@ void ppc_hash64_stop_access(uint64_t token)
>      }
>  }
>  
> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
> +static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
>                                       bool secondary, target_ulong ptem,
>                                       ppc_hash_pte64_t *pte)
>  {
> +    CPUPPCState *env = &cpu->env;
>      int i;
>      uint64_t token;
>      target_ulong pte0, pte1;
>      target_ulong pte_index;
>  
>      pte_index = (hash & env->htab_mask) * HPTES_PER_GROUP;
> -    token = ppc_hash64_start_access(ppc_env_get_cpu(env), pte_index);
> +    token = ppc_hash64_start_access(cpu, pte_index);
>      if (!token) {
>          return -1;
>      }
>      for (i = 0; i < HPTES_PER_GROUP; i++) {
> -        pte0 = ppc_hash64_load_hpte0(env, token, i);
> -        pte1 = ppc_hash64_load_hpte1(env, token, i);
> +        pte0 = ppc_hash64_load_hpte0(cpu, token, i);
> +        pte1 = ppc_hash64_load_hpte1(cpu, token, i);
>  
>          if ((pte0 & HPTE64_V_VALID)
>              && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
> @@ -399,10 +410,11 @@ static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
>      return epnshift;
>  }
>  
> -static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
> +static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>                                       ppc_slb_t *slb, target_ulong eaddr,
>                                       ppc_hash_pte64_t *pte)
>  {
> +    CPUPPCState *env = &cpu->env;
>      hwaddr pte_offset;
>      hwaddr hash;
>      uint64_t vsid, epnshift, epnmask, epn, ptem;
> @@ -435,7 +447,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
>              " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx
>              " hash=" TARGET_FMT_plx "\n",
>              env->htab_base, env->htab_mask, vsid, ptem,  hash);
> -    pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, pte);
> +    pte_offset = ppc_hash64_pteg_search(cpu, hash, 0, ptem, pte);
>  
>      if (pte_offset == -1) {
>          /* Secondary PTEG lookup */
> @@ -445,7 +457,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
>                  " hash=" TARGET_FMT_plx "\n", env->htab_base,
>                  env->htab_mask, vsid, ptem, ~hash);
>  
> -        pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, pte);
> +        pte_offset = ppc_hash64_pteg_search(cpu, ~hash, 1, ptem, pte);
>      }
>  
>      return pte_offset;
> @@ -492,7 +504,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>      }
>  
>      /* 2. Translation is on, so look up the SLB */
> -    slb = slb_lookup(env, eaddr);
> +    slb = slb_lookup(cpu, eaddr);
>  
>      if (!slb) {
>          if (rwx == 2) {
> @@ -514,7 +526,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>      }
>  
>      /* 4. Locate the PTE in the hash table */
> -    pte_offset = ppc_hash64_htab_lookup(env, slb, eaddr, &pte);
> +    pte_offset = ppc_hash64_htab_lookup(cpu, slb, eaddr, &pte);
>      if (pte_offset == -1) {
>          if (rwx == 2) {
>              cs->exception_index = POWERPC_EXCP_ISI;
> @@ -536,8 +548,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>  
>      /* 5. Check access permissions */
>  
> -    pp_prot = ppc_hash64_pte_prot(env, slb, pte);
> -    amr_prot = ppc_hash64_amr_prot(env, pte);
> +    pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
> +    amr_prot = ppc_hash64_amr_prot(cpu, pte);
>      prot = pp_prot & amr_prot;
>  
>      if ((need_prot[rwx] & ~prot) != 0) {
> @@ -580,7 +592,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>      }
>  
>      if (new_pte1 != pte.pte1) {
> -        ppc_hash64_store_hpte(env, pte_offset / HASH_PTE_SIZE_64,
> +        ppc_hash64_store_hpte(cpu, pte_offset / HASH_PTE_SIZE_64,
>                                pte.pte0, new_pte1);
>      }
>  
> @@ -594,8 +606,9 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>      return 0;
>  }
>  
> -hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
> +hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>  {
> +    CPUPPCState *env = &cpu->env;
>      ppc_slb_t *slb;
>      hwaddr pte_offset;
>      ppc_hash_pte64_t pte;
> @@ -605,12 +618,12 @@ hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
>          return addr & 0x0FFFFFFFFFFFFFFFULL;
>      }
>  
> -    slb = slb_lookup(env, addr);
> +    slb = slb_lookup(cpu, addr);
>      if (!slb) {
>          return -1;
>      }
>  
> -    pte_offset = ppc_hash64_htab_lookup(env, slb, addr, &pte);
> +    pte_offset = ppc_hash64_htab_lookup(cpu, slb, addr, &pte);
>      if (pte_offset == -1) {
>          return -1;
>      }
> @@ -618,11 +631,11 @@ hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
>      return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
>  }
>  
> -void ppc_hash64_store_hpte(CPUPPCState *env,
> +void ppc_hash64_store_hpte(PowerPCCPU *cpu,
>                             target_ulong pte_index,
>                             target_ulong pte0, target_ulong pte1)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>  
>      if (kvmppc_kern_htab) {
>          kvmppc_hash64_write_pte(env, pte_index, pte0, pte1);
> @@ -632,9 +645,10 @@ void ppc_hash64_store_hpte(CPUPPCState *env,
>      pte_index *= HASH_PTE_SIZE_64;
>      if (env->external_htab) {
>          stq_p(env->external_htab + pte_index, pte0);
> -        stq_p(env->external_htab + pte_index + HASH_PTE_SIZE_64/2, pte1);
> +        stq_p(env->external_htab + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
>      } else {
> -        stq_phys(cs->as, env->htab_base + pte_index, pte0);
> -        stq_phys(cs->as, env->htab_base + pte_index + HASH_PTE_SIZE_64/2, pte1);
> +        stq_phys(CPU(cpu)->as, env->htab_base + pte_index, pte0);
> +        stq_phys(CPU(cpu)->as,
> +                 env->htab_base + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
>      }
>  }
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 291750f..6e3de7e 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -4,12 +4,13 @@
>  #ifndef CONFIG_USER_ONLY
>  
>  #ifdef TARGET_PPC64
> -void dump_slb(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env);
> -int ppc_store_slb (CPUPPCState *env, target_ulong rb, target_ulong rs);
> -hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr);
> +void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
> +void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
> +hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
>                                  int mmu_idx);
> -void ppc_hash64_store_hpte(CPUPPCState *env, target_ulong index,
> +void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
>                             target_ulong pte0, target_ulong pte1);
>  #endif
>  
> @@ -85,31 +86,31 @@ extern bool kvmppc_kern_htab;
>  uint64_t ppc_hash64_start_access(PowerPCCPU *cpu, target_ulong pte_index);
>  void ppc_hash64_stop_access(uint64_t token);
>  
> -static inline target_ulong ppc_hash64_load_hpte0(CPUPPCState *env,
> +static inline target_ulong ppc_hash64_load_hpte0(PowerPCCPU *cpu,
>                                                   uint64_t token, int index)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>      uint64_t addr;
>  
>      addr = token + (index * HASH_PTE_SIZE_64);
>      if (env->external_htab) {
>          return  ldq_p((const void *)(uintptr_t)addr);
>      } else {
> -        return ldq_phys(cs->as, addr);
> +        return ldq_phys(CPU(cpu)->as, addr);
>      }
>  }
>  
> -static inline target_ulong ppc_hash64_load_hpte1(CPUPPCState *env,
> +static inline target_ulong ppc_hash64_load_hpte1(PowerPCCPU *cpu,
>                                                   uint64_t token, int index)
>  {
> -    CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    CPUPPCState *env = &cpu->env;
>      uint64_t addr;
>  
>      addr = token + (index * HASH_PTE_SIZE_64) + HASH_PTE_SIZE_64/2;
>      if (env->external_htab) {
>          return  ldq_p((const void *)(uintptr_t)addr);
>      } else {
> -        return ldq_phys(cs->as, addr);
> +        return ldq_phys(CPU(cpu)->as, addr);
>      }
>  }
>  
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index 5217691..0ab73bc 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -1297,7 +1297,7 @@ void dump_mmu(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env)
>      case POWERPC_MMU_2_06a:
>      case POWERPC_MMU_2_07:
>      case POWERPC_MMU_2_07a:
> -        dump_slb(f, cpu_fprintf, env);
> +        dump_slb(f, cpu_fprintf, ppc_env_get_cpu(env));
>          break;
>  #endif
>      default:
> @@ -1439,12 +1439,12 @@ hwaddr ppc_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
>      case POWERPC_MMU_2_06a:
>      case POWERPC_MMU_2_07:
>      case POWERPC_MMU_2_07a:
> -        return ppc_hash64_get_phys_page_debug(env, addr);
> +        return ppc_hash64_get_phys_page_debug(cpu, addr);
>  #endif
>  
>      case POWERPC_MMU_32B:
>      case POWERPC_MMU_601:
> -        return ppc_hash32_get_phys_page_debug(env, addr);
> +        return ppc_hash32_get_phys_page_debug(cpu, addr);
>  
>      default:
>          ;
> @@ -1510,6 +1510,7 @@ static int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
>                                      int rw, int mmu_idx)
>  {
>      CPUState *cs = CPU(ppc_env_get_cpu(env));
> +    PowerPCCPU *cpu = POWERPC_CPU(cs);
>      mmu_ctx_t ctx;
>      int access_type;
>      int ret = 0;
> @@ -1611,9 +1612,9 @@ static int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
>                  tlb_miss:
>                      env->error_code |= ctx.key << 19;
>                      env->spr[SPR_HASH1] = env->htab_base +
> -                        get_pteg_offset32(env, ctx.hash[0]);
> +                        get_pteg_offset32(cpu, ctx.hash[0]);
>                      env->spr[SPR_HASH2] = env->htab_base +
> -                        get_pteg_offset32(env, ctx.hash[1]);
> +                        get_pteg_offset32(cpu, ctx.hash[1]);
>                      break;
>                  case POWERPC_MMU_SOFT_74xx:
>                      if (rw == 1) {
> @@ -2101,7 +2102,7 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
>          /* flags = flags */
>          rs |= ((value >> 27) & 0xf) << 8;
>  
> -        ppc_store_slb(env, rb, rs);
> +        ppc_store_slb(cpu, rb, rs);
>      } else
>  #endif
>      if (env->sr[srnum] != value) {
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb David Gibson
@ 2016-01-27 17:21   ` Laurent Vivier
  2016-01-28  4:16   ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 33+ messages in thread
From: Laurent Vivier @ 2016-01-27 17:21 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, thuth, qemu-ppc, agraf, qemu-devel



On 27/01/2016 11:13, David Gibson wrote:
> ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
> Currently it takes two parameters, which contain values encoded as the
> register arguments to the slbmte instruction: one register contains the
> ESID portion of the SLBE along with the slot number, while the other
> contains the VSID portion of the SLBE.
> 
> We're shortly going to want to do some SLB updates from other code where
> it is more convenient to supply the slot number and ESID separately, so
> rework this function and its callers to work this way.
> 
> As a bonus, this slightly simplifies the emulation of segment registers for
> when running a 32-bit OS on a 64-bit CPU.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Reviewed-by: Laurent Vivier <lvivier@redhat.com>

> ---
>  target-ppc/kvm.c        |  2 +-
>  target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
>  target-ppc/mmu-hash64.h |  3 ++-
>  target-ppc/mmu_helper.c | 14 +++++---------
>  4 files changed, 21 insertions(+), 22 deletions(-)
> 
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 98d7ba6..0f45380 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
>               * Only restore valid entries
>               */
>              if (rb & SLB_ESID_V) {
> -                ppc_store_slb(cpu, rb, rs);
> +                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs);
>              }
>          }
>  #endif
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 03e25fd..6e05643 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
>      }
>  }
>  
> -int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> +                  target_ulong esid, target_ulong vsid)
>  {
>      CPUPPCState *env = &cpu->env;
> -    int slot = rb & 0xfff;
>      ppc_slb_t *slb = &env->slb[slot];
>  
> -    if (rb & (0x1000 - env->slb_nr)) {
> -        return -1; /* Reserved bits set or slot too high */
> +    if (slot >= env->slb_nr) {
> +        return -1; /* Bad slot number */
> +    }
> +    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
> +        return -1; /* Reserved bits set */
>      }
> -    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> +    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
>          return -1; /* Bad segment size */
>      }
> -    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
> +    if ((vsid & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
>          return -1; /* 1T segment on MMU that doesn't support it */
>      }
>  
> -    /* Mask out the slot number as we store the entry */
> -    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
> -    slb->vsid = rs;
> +    slb->esid = esid;
> +    slb->vsid = vsid;
>  
>      LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> -            " %016" PRIx64 "\n", __func__, slot, rb, rs,
> +            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
>              slb->esid, slb->vsid);
>  
>      return 0;
> @@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
>  {
>      PowerPCCPU *cpu = ppc_env_get_cpu(env);
>  
> -    if (ppc_store_slb(cpu, rb, rs) < 0) {
> +    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs) < 0) {
>          helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
>                                     POWERPC_EXCP_INVAL);
>      }
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 6e3de7e..24fd2c4 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -6,7 +6,8 @@
>  #ifdef TARGET_PPC64
>  void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
>  void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
> -int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> +                  target_ulong esid, target_ulong vsid);
>  hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
>                                  int mmu_idx);
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index 0ab73bc..c040b17 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -2088,21 +2088,17 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
>              (int)srnum, value, env->sr[srnum]);
>  #if defined(TARGET_PPC64)
>      if (env->mmu_model & POWERPC_MMU_64) {
> -        uint64_t rb = 0, rs = 0;
> +        uint64_t esid, vsid;
>  
>          /* ESID = srnum */
> -        rb |= ((uint32_t)srnum & 0xf) << 28;
> -        /* Set the valid bit */
> -        rb |= SLB_ESID_V;
> -        /* Index = ESID */
> -        rb |= (uint32_t)srnum;
> +        esid = ((uint64_t)(srnum & 0xf) << 28) | SLB_ESID_V;
>  
>          /* VSID = VSID */
> -        rs |= (value & 0xfffffff) << 12;
> +        vsid = (value & 0xfffffff) << 12;
>          /* flags = flags */
> -        rs |= ((value >> 27) & 0xf) << 8;
> +        vsid |= ((value >> 27) & 0xf) << 8;
>  
> -        ppc_store_slb(cpu, rb, rs);
> +        ppc_store_slb(cpu, srnum, esid, vsid);
>      } else
>  #endif
>      if (env->sr[srnum] != value) {
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
@ 2016-01-27 17:58   ` Laurent Vivier
  2016-01-27 23:31     ` David Gibson
  2016-01-28  4:20   ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 33+ messages in thread
From: Laurent Vivier @ 2016-01-27 17:58 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, thuth, qemu-ppc, agraf, qemu-devel



On 27/01/2016 11:13, David Gibson wrote:
> Currently the tlbiva instruction (used on 44x chips) and the tlbie
> instruction (used on hash MMU chips) are both handled via
> ppc_tlb_invalidate_one().  This is silly, because they're invoked from
> different places, and do different things.
> 
> Clean this up by separating out the tlbiva instruction into its own
> handling.  In fact the implementation is only a stub anyway.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/helper.h     |  1 +
>  target-ppc/mmu_helper.c | 14 ++++++++++----
>  target-ppc/translate.c  |  2 +-
>  3 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/target-ppc/helper.h b/target-ppc/helper.h
> index 869be15..e5a8f7b 100644
> --- a/target-ppc/helper.h
> +++ b/target-ppc/helper.h
> @@ -544,6 +544,7 @@ DEF_HELPER_2(74xx_tlbd, void, env, tl)
>  DEF_HELPER_2(74xx_tlbi, void, env, tl)
>  DEF_HELPER_FLAGS_1(tlbia, TCG_CALL_NO_RWG, void, env)
>  DEF_HELPER_FLAGS_2(tlbie, TCG_CALL_NO_RWG, void, env, tl)
> +DEF_HELPER_FLAGS_2(tlbiva, TCG_CALL_NO_RWG, void, env, tl)
>  #if defined(TARGET_PPC64)
>  DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
>  DEF_HELPER_2(load_slb_esid, tl, env, tl)
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index 82ebe5d..e9e0edb 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -1971,10 +1971,6 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
>              ppc6xx_tlb_invalidate_virt(env, addr, 1);
>          }
>          break;
> -    case POWERPC_MMU_BOOKE:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
> -        break;
>      case POWERPC_MMU_32B:
>      case POWERPC_MMU_601:
>          /* tlbie invalidate TLBs for all segments */
> @@ -2116,6 +2112,16 @@ void helper_tlbie(CPUPPCState *env, target_ulong addr)
>      ppc_tlb_invalidate_one(env, addr);
>  }
>  
> +void helper_tlbiva(CPUPPCState *env, target_ulong addr)
> +{
> +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
> +
> +    /* tlbiva instruciton only exists on BookE */

Typo here ^^

> +    assert(env->mmu_model == POWERPC_MMU_BOOKE);
> +    /* XXX: TODO */
> +    cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
> +}
> +
>  /* Software driven TLBs management */
>  /* PowerPC 602/603 software TLB load instructions helpers */
>  static void do_6xx_tlb(CPUPPCState *env, target_ulong new_EPN, int is_code)
> diff --git a/target-ppc/translate.c b/target-ppc/translate.c
> index 4be7eaa..a05a169 100644
> --- a/target-ppc/translate.c
> +++ b/target-ppc/translate.c
> @@ -5904,7 +5904,7 @@ static void gen_tlbiva(DisasContext *ctx)
>      }
>      t0 = tcg_temp_new();
>      gen_addr_reg_index(ctx, t0);
> -    gen_helper_tlbie(cpu_env, cpu_gpr[rB(ctx->opcode)]);
> +    gen_helper_tlbiva(cpu_env, cpu_gpr[rB(ctx->opcode)]);
>      tcg_temp_free(t0);
>  #endif
>  }
> 
Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
@ 2016-01-27 18:06   ` Laurent Vivier
  2016-01-27 23:47     ` David Gibson
  2016-01-28  4:20   ` Benjamin Herrenschmidt
  2016-01-28 15:45   ` Thomas Huth
  2 siblings, 1 reply; 33+ messages in thread
From: Laurent Vivier @ 2016-01-27 18:06 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, thuth, qemu-ppc, agraf, qemu-devel



On 27/01/2016 11:13, David Gibson wrote:
> ppc_tlb_invalidate_one() has a big switch handling many different MMU
> types.  However, most of those branches can never be reached:
> 
> It is called from 3 places: from remove_hpte() and h_protect() in
> spapr_hcall.c (which always has a 64-bit hash MMU type), and from
> helper_tlbie() in mmu_helper.c.
> 
> Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
> gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
> set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
> used only on 440 and 460 models with the BookE mmu model.
> 
> This means the exhaustive list of MMU types which may call
> ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
> POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
> POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
> 
> Clean up by removing logic for all other MMU types from
> ppc_tlb_invalidate_one().
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/mmu_helper.c | 20 ++------------------
>  1 file changed, 2 insertions(+), 18 deletions(-)
> 
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index c040b17..82ebe5d 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -1971,25 +1971,10 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
>              ppc6xx_tlb_invalidate_virt(env, addr, 1);
>          }
>          break;
> -    case POWERPC_MMU_SOFT_4xx:
> -    case POWERPC_MMU_SOFT_4xx_Z:
> -        ppc4xx_tlb_invalidate_virt(env, addr, env->spr[SPR_40x_PID]);

This function is now called by no one. Perhaps it should move to the
next patch in helper_tlbiva() (according to your comments) ?

> -        break;
> -    case POWERPC_MMU_REAL:
> -        cpu_abort(CPU(cpu), "No TLB for PowerPC 4xx in real mode\n");
> -        break;
> -    case POWERPC_MMU_MPC8xx:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "MPC8xx MMU model is not implemented\n");
> -        break;
>      case POWERPC_MMU_BOOKE:
>          /* XXX: TODO */
>          cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
>          break;
> -    case POWERPC_MMU_BOOKE206:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "BookE 2.06 MMU model is not implemented\n");
> -        break;
>      case POWERPC_MMU_32B:
>      case POWERPC_MMU_601:
>          /* tlbie invalidate TLBs for all segments */
> @@ -2031,9 +2016,8 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
>          break;
>  #endif /* defined(TARGET_PPC64) */
>      default:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "Unknown MMU model\n");
> -        break;
> +        /* Should never reach here with other MMU models */
> +        assert(0);
>      }
>  #else
>      ppc_tlb_invalidate_all(env);
> 


* Re: [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  2016-01-27 17:58   ` Laurent Vivier
@ 2016-01-27 23:31     ` David Gibson
  0 siblings, 0 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 23:31 UTC (permalink / raw)
  To: Laurent Vivier; +Cc: thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, Jan 27, 2016 at 06:58:43PM +0100, Laurent Vivier wrote:
> On 27/01/2016 11:13, David Gibson wrote:
> > Currently the tlbiva instruction (used on 44x chips) and the tlbie
> > instruction (used on hash MMU chips) are both handled via
> > ppc_tlb_invalidate_one().  This is silly, because they're invoked from
> > different places, and do different things.
> > 
> > Clean this up by separating out the tlbiva instruction into its own
> > handling.  In fact the implementation is only a stub anyway.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  target-ppc/helper.h     |  1 +
> >  target-ppc/mmu_helper.c | 14 ++++++++++----
> >  target-ppc/translate.c  |  2 +-
> >  3 files changed, 12 insertions(+), 5 deletions(-)
> > 
> > diff --git a/target-ppc/helper.h b/target-ppc/helper.h
> > index 869be15..e5a8f7b 100644
> > --- a/target-ppc/helper.h
> > +++ b/target-ppc/helper.h
> > @@ -544,6 +544,7 @@ DEF_HELPER_2(74xx_tlbd, void, env, tl)
> >  DEF_HELPER_2(74xx_tlbi, void, env, tl)
> >  DEF_HELPER_FLAGS_1(tlbia, TCG_CALL_NO_RWG, void, env)
> >  DEF_HELPER_FLAGS_2(tlbie, TCG_CALL_NO_RWG, void, env, tl)
> > +DEF_HELPER_FLAGS_2(tlbiva, TCG_CALL_NO_RWG, void, env, tl)
> >  #if defined(TARGET_PPC64)
> >  DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
> >  DEF_HELPER_2(load_slb_esid, tl, env, tl)
> > diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> > index 82ebe5d..e9e0edb 100644
> > --- a/target-ppc/mmu_helper.c
> > +++ b/target-ppc/mmu_helper.c
> > @@ -1971,10 +1971,6 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
> >              ppc6xx_tlb_invalidate_virt(env, addr, 1);
> >          }
> >          break;
> > -    case POWERPC_MMU_BOOKE:
> > -        /* XXX: TODO */
> > -        cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
> > -        break;
> >      case POWERPC_MMU_32B:
> >      case POWERPC_MMU_601:
> >          /* tlbie invalidate TLBs for all segments */
> > @@ -2116,6 +2112,16 @@ void helper_tlbie(CPUPPCState *env, target_ulong addr)
> >      ppc_tlb_invalidate_one(env, addr);
> >  }
> >  
> > +void helper_tlbiva(CPUPPCState *env, target_ulong addr)
> > +{
> > +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
> > +
> > +    /* tlbiva instruciton only exists on BookE */
> 
> Typo here ^^

Corrected, thanks.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-27 18:06   ` Laurent Vivier
@ 2016-01-27 23:47     ` David Gibson
  0 siblings, 0 replies; 33+ messages in thread
From: David Gibson @ 2016-01-27 23:47 UTC (permalink / raw)
  To: Laurent Vivier; +Cc: thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, Jan 27, 2016 at 07:06:26PM +0100, Laurent Vivier wrote:
> On 27/01/2016 11:13, David Gibson wrote:
> > ppc_tlb_invalidate_one() has a big switch handling many different MMU
> > types.  However, most of those branches can never be reached:
> > 
> > It is called from 3 places: from remove_hpte() and h_protect() in
> > spapr_hcall.c (which always has a 64-bit hash MMU type), and from
> > helper_tlbie() in mmu_helper.c.
> > 
> > Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
> > gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
> > set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
> > used only on 440 and 460 models with the BookE mmu model.
> > 
> > This means the exhaustive list of MMU types which may call
> > ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
> > POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
> > POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
> > 
> > Clean up by removing logic for all other MMU types from
> > ppc_tlb_invalidate_one().
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  target-ppc/mmu_helper.c | 20 ++------------------
> >  1 file changed, 2 insertions(+), 18 deletions(-)
> > 
> > diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> > index c040b17..82ebe5d 100644
> > --- a/target-ppc/mmu_helper.c
> > +++ b/target-ppc/mmu_helper.c
> > @@ -1971,25 +1971,10 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
> >              ppc6xx_tlb_invalidate_virt(env, addr, 1);
> >          }
> >          break;
> > -    case POWERPC_MMU_SOFT_4xx:
> > -    case POWERPC_MMU_SOFT_4xx_Z:
> > -        ppc4xx_tlb_invalidate_virt(env, addr, env->spr[SPR_40x_PID]);
> 
> This function is now called by no one.

Ah, yes.  Well, actually it was already called by no one, but now it's obvious.

> Perhaps it should move to the
> next patch in helper_tlbiva() (according to your comments) ?

Uh... I'm not exactly sure what you're suggesting.  Moving it to the
next patch doesn't really make sense - this is about the 4xx MMU type
which is *not* the same as the BookE MMU type used on 44x and 46x
(yes, that's confusing - one of the dangers of using an "xx" name).

Hmm.. not sure what to do with this - ppc4xx_tlb_invalidate_virt()
should be removed, but I don't know that it's worth respinning the
whole series just for that.




* Re: [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb David Gibson
  2016-01-27 17:21   ` Laurent Vivier
@ 2016-01-28  4:16   ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:16 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
> Currently it takes two parameters, which contain values encoded as
> the
> register arguments to the slbmte instruction, one register contains
> the
> ESID portion of the SLBE and also the slot number, the other contains
> the
> VSID portion of the SLBE.
> 
> We're shortly going to want to do some SLB updates from other code
> where
> it is more convenient to supply the slot number and ESID separately,
> so
> rework this function and its callers to work this way.
> 
> As a bonus, this slightly simplifies the emulation of segment
> registers for
> when running a 32-bit OS on a 64-bit CPU.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> ---
>  target-ppc/kvm.c        |  2 +-
>  target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
>  target-ppc/mmu-hash64.h |  3 ++-
>  target-ppc/mmu_helper.c | 14 +++++---------
>  4 files changed, 21 insertions(+), 22 deletions(-)
> 
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 98d7ba6..0f45380 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
>               * Only restore valid entries
>               */
>              if (rb & SLB_ESID_V) {
> -                ppc_store_slb(cpu, rb, rs);
> +                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs);
>              }
>          }
>  #endif
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 03e25fd..6e05643 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env,
> target_ulong addr)
>      }
>  }
>  
> -int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> +                  target_ulong esid, target_ulong vsid)
>  {
>      CPUPPCState *env = &cpu->env;
> -    int slot = rb & 0xfff;
>      ppc_slb_t *slb = &env->slb[slot];
>  
> -    if (rb & (0x1000 - env->slb_nr)) {
> -        return -1; /* Reserved bits set or slot too high */
> +    if (slot >= env->slb_nr) {
> +        return -1; /* Bad slot number */
> +    }
> +    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
> +        return -1; /* Reserved bits set */
>      }
> -    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> +    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
>          return -1; /* Bad segment size */
>      }
> -    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG))
> {
> +    if ((vsid & SLB_VSID_B) && !(env->mmu_model &
> POWERPC_MMU_1TSEG)) {
>          return -1; /* 1T segment on MMU that doesn't support it */
>      }
>  
> -    /* Mask out the slot number as we store the entry */
> -    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
> -    slb->vsid = rs;
> +    slb->esid = esid;
> +    slb->vsid = vsid;
>  
>      LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016"
> PRIx64
> -            " %016" PRIx64 "\n", __func__, slot, rb, rs,
> +            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
>              slb->esid, slb->vsid);
>  
>      return 0;
> @@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env,
> target_ulong rb, target_ulong rs)
>  {
>      PowerPCCPU *cpu = ppc_env_get_cpu(env);
>  
> -    if (ppc_store_slb(cpu, rb, rs) < 0) {
> +    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs) < 0) {
>          helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
>                                     POWERPC_EXCP_INVAL);
>      }
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 6e3de7e..24fd2c4 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -6,7 +6,8 @@
>  #ifdef TARGET_PPC64
>  void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
>  void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU
> *cpu);
> -int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong
> rs);
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> +                  target_ulong esid, target_ulong vsid);
>  hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong
> addr);
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong
> address, int rw,
>                                  int mmu_idx);
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index 0ab73bc..c040b17 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -2088,21 +2088,17 @@ void helper_store_sr(CPUPPCState *env,
> target_ulong srnum, target_ulong value)
>              (int)srnum, value, env->sr[srnum]);
>  #if defined(TARGET_PPC64)
>      if (env->mmu_model & POWERPC_MMU_64) {
> -        uint64_t rb = 0, rs = 0;
> +        uint64_t esid, vsid;
>  
>          /* ESID = srnum */
> -        rb |= ((uint32_t)srnum & 0xf) << 28;
> -        /* Set the valid bit */
> -        rb |= SLB_ESID_V;
> -        /* Index = ESID */
> -        rb |= (uint32_t)srnum;
> +        esid = ((uint64_t)(srnum & 0xf) << 28) | SLB_ESID_V;
>  
>          /* VSID = VSID */
> -        rs |= (value & 0xfffffff) << 12;
> +        vsid = (value & 0xfffffff) << 12;
>          /* flags = flags */
> -        rs |= ((value >> 27) & 0xf) << 8;
> +        vsid |= ((value >> 27) & 0xf) << 8;
>  
> -        ppc_store_slb(cpu, rb, rs);
> +        ppc_store_slb(cpu, srnum, esid, vsid);
>      } else
>  #endif
>      if (env->sr[srnum] != value) {

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCHv2 04/10] target-ppc: Rework SLB page size lookup
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 04/10] target-ppc: Rework SLB page size lookup David Gibson
@ 2016-01-28  4:17   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:17 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> Currently, the ppc_hash64_page_shift() function looks up a page size
> based
> on information in an SLB entry.  It open codes the bit translation
> for
> existing CPUs, however different CPU models can have different SLB
> encodings.  We already store those in the 'sps' table in CPUPPCState,
> but
> we don't currently enforce that that actually matches the logic in
> ppc_hash64_page_shift.
> 
> This patch reworks lookup of page size from SLB in several ways:
>   * ppc_store_slb() will now fail (triggering an illegal instruction
>     exception) if given a bad SLB page size encoding
>   * On success ppc_store_slb() stores a pointer to the relevant entry
> in
>     the page size table in the SLB entry.  This is looked up directly
> from
>     the published table of page size encodings, so can't get out ot
> sync.
>   * ppc_hash64_htab_lookup() and others now use this precached page
> size
>     information rather than decoding the SLB values
>   * Now that callers have easy access to the page_shift,
>     ppc_hash64_pte_raddr() amounts to just a deposit64(), so remove
> it and
>     have the callers use deposit64() directly.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> ---
>  target-ppc/cpu.h        |  1 +
>  target-ppc/machine.c    | 20 +++++++++++++
>  target-ppc/mmu-hash64.c | 74 +++++++++++++++++++++++--------------
> ------------
>  3 files changed, 56 insertions(+), 39 deletions(-)
> 
> diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
> index 2bc96b4..0820390 100644
> --- a/target-ppc/cpu.h
> +++ b/target-ppc/cpu.h
> @@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
>  struct ppc_slb_t {
>      uint64_t esid;
>      uint64_t vsid;
> +    const struct ppc_one_seg_page_size *sps;
>  };
>  
>  #define MAX_SLB_ENTRIES         64
> diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> index b61c060..ca62d3e 100644
> --- a/target-ppc/machine.c
> +++ b/target-ppc/machine.c
> @@ -2,6 +2,7 @@
>  #include "hw/boards.h"
>  #include "sysemu/kvm.h"
>  #include "helper_regs.h"
> +#include "mmu-hash64.h"
>  
>  static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
>  {
> @@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
>      return (cpu->env.mmu_model & POWERPC_MMU_64);
>  }
>  
> +static int slb_post_load(void *opaque, int version_id)
> +{
> +    PowerPCCPU *cpu = opaque;
> +    CPUPPCState *env = &cpu->env;
> +    int i;
> +
> +    /* We've pulled in the raw esid and vsid values from the
> migration
> +     * stream, but we need to recompute the page size pointers */
> +    for (i = 0; i < env->slb_nr; i++) {
> +        if (ppc_store_slb(cpu, i, env->slb[i].esid, env-
> >slb[i].vsid) < 0) {
> +            /* Migration source had bad values in its SLB */
> +            return -1;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  static const VMStateDescription vmstate_slb = {
>      .name = "cpu/slb",
>      .version_id = 1,
>      .minimum_version_id = 1,
>      .needed = slb_needed,
> +    .post_load = slb_post_load,
>      .fields = (VMStateField[]) {
>          VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
>          VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 6e05643..b784791 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -19,6 +19,7 @@
>   */
>  #include "cpu.h"
>  #include "exec/helper-proto.h"
> +#include "qemu/error-report.h"
>  #include "sysemu/kvm.h"
>  #include "kvm_ppc.h"
>  #include "mmu-hash64.h"
> @@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong
> slot,
>  {
>      CPUPPCState *env = &cpu->env;
>      ppc_slb_t *slb = &env->slb[slot];
> +    const struct ppc_one_seg_page_size *sps = NULL;
> +    int i;
>  
>      if (slot >= env->slb_nr) {
>          return -1; /* Bad slot number */
> @@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong
> slot,
>          return -1; /* 1T segment on MMU that doesn't support it */
>      }
>  
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
> +
> +        if (!sps1->page_shift) {
> +            break;
> +        }
> +
> +        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> +            sps = sps1;
> +            break;
> +        }
> +    }
> +
> +    if (!sps) {
> +        error_report("Bad page size encoding in SLB store: slot
> "TARGET_FMT_lu
> +                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
> +                     slot, esid, vsid);
> +        return -1;
> +    }
> +
>      slb->esid = esid;
>      slb->vsid = vsid;
> +    slb->sps = sps;
>  
>      LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016"
> PRIx64
>              " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> @@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU
> *cpu, hwaddr hash,
>      return -1;
>  }
>  
> -static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
> -{
> -    uint64_t epnshift;
> -
> -    /* Page size according to the SLB, which we use to generate the
> -     * EPN for hash table lookup..  When we implement more recent
> MMU
> -     * extensions this might be different from the actual page size
> -     * encoded in the PTE */
> -    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
> -        epnshift = TARGET_PAGE_BITS;
> -    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
> -        epnshift = TARGET_PAGE_BITS_64K;
> -    } else {
> -        epnshift = TARGET_PAGE_BITS_16M;
> -    }
> -    return epnshift;
> -}
> -
>  static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>                                       ppc_slb_t *slb, target_ulong
> eaddr,
>                                       ppc_hash_pte64_t *pte)
> @@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU
> *cpu,
>      CPUPPCState *env = &cpu->env;
>      hwaddr pte_offset;
>      hwaddr hash;
> -    uint64_t vsid, epnshift, epnmask, epn, ptem;
> +    uint64_t vsid, epnmask, epn, ptem;
> +
> +    /* The SLB store path should prevent any bad page size encodings
> +     * getting in there, so: */
> +    assert(slb->sps);
>  
> -    epnshift = ppc_hash64_page_shift(slb);
> -    epnmask = ~((1ULL << epnshift) - 1);
> +    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
>  
>      if (slb->vsid & SLB_VSID_B) {
>          /* 1TB segment */
>          vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
>          epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
> -        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
> +        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
>      } else {
>          /* 256M segment */
>          vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
>          epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
> -        hash = vsid ^ (epn >> epnshift);
> +        hash = vsid ^ (epn >> slb->sps->page_shift);
>      }
>      ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) &
> HPTE64_V_AVPN);
>  
> @@ -465,20 +474,6 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU
> *cpu,
>      return pte_offset;
>  }
>  
> -static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t
> pte,
> -                                   target_ulong eaddr)
> -{
> -    hwaddr mask;
> -    int target_page_bits;
> -    hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
> -    /*
> -     * We support 4K, 64K and 16M now
> -     */
> -    target_page_bits = ppc_hash64_page_shift(slb);
> -    mask = (1ULL << target_page_bits) - 1;
> -    return (rpn & ~mask) | (eaddr & mask);
> -}
> -
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>                                  int rwx, int mmu_idx)
>  {
> @@ -600,7 +595,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu,
> target_ulong eaddr,
>  
>      /* 7. Determine the real address from the PTE */
>  
> -    raddr = ppc_hash64_pte_raddr(slb, pte, eaddr);
> +    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps-
> >page_shift, eaddr);
>  
>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr &
> TARGET_PAGE_MASK,
>                   prot, mmu_idx, TARGET_PAGE_SIZE);
> @@ -630,7 +625,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU
> *cpu, target_ulong addr)
>          return -1;
>      }
>  
> -    return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
> +    return deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps-
> >page_shift, addr)
> +        & TARGET_PAGE_MASK;
>  }
>  
>  void ppc_hash64_store_hpte(PowerPCCPU *cpu,


* Re: [Qemu-devel] [PATCHv2 05/10] target-ppc: Use actual page size encodings from HPTE
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
@ 2016-01-28  4:18   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:18 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> At present the 64-bit hash MMU code uses information from the SLB to
> determine the page size of a translation.  We do need that
> information to
> correctly look up the hash table.  However the MMU also allows a
> possibly larger page size to be encoded into the HPTE itself, which
> is used
> to populate the TLB.  At present qemu doesn't check that, and so
> doesn't
> support the MPSS "Multiple Page Size per Segment" feature.
> 
> This makes a start on allowing this, by adding an hpte_page_shift()
> function which looks up the page size of an HPTE.  We use this to
> validate
> page sizes encodings on faults, and populate the qemu TLB with larger
> page sizes when appropriate.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

(Note that we don't actually populate the QEMU TLB with larger page
sizes, it doesn't support it ... Also it tries to keep track of
the presence of large pages to deal with targeted invalidations
but we don't do the latter on hash64 and we wouldn't need that
extra care anyway since our tlbie carries the page size as well).

Cheers,
Ben.

> ---
>  target-ppc/mmu-hash64.c | 63
> ++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 60 insertions(+), 3 deletions(-)
> 
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index b784791..ee1e8bf 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -21,6 +21,7 @@
>  #include "exec/helper-proto.h"
>  #include "qemu/error-report.h"
>  #include "sysemu/kvm.h"
> +#include "qemu/error-report.h"
>  #include "kvm_ppc.h"
>  #include "mmu-hash64.h"
>  
> @@ -474,12 +475,50 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU
> *cpu,
>      return pte_offset;
>  }
>  
> +static unsigned hpte_page_shift(const struct ppc_one_seg_page_size
> *sps,
> +    uint64_t pte0, uint64_t pte1)
> +{
> +    int i;
> +
> +    if (!(pte0 & HPTE64_V_LARGE)) {
> +        if (sps->page_shift != 12) {
> +            /* 4kiB page in a non 4kiB segment */
> +            return 0;
> +        }
> +        /* Normal 4kiB page */
> +        return 12;
> +    }
> +
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const struct ppc_one_page_size *ps = &sps->enc[i];
> +        uint64_t mask;
> +
> +        if (!ps->page_shift) {
> +            break;
> +        }
> +
> +        if (ps->page_shift == 12) {
> +            /* L bit is set so this can't be a 4kiB page */
> +            continue;
> +        }
> +
> +        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
> +
> +        if ((pte1 & mask) == (ps->pte_enc << HPTE64_R_RPN_SHIFT)) {
> +            return ps->page_shift;
> +        }
> +    }
> +
> +    return 0; /* Bad page size encoding */
> +}
> +
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>                                  int rwx, int mmu_idx)
>  {
>      CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
>      ppc_slb_t *slb;
> +    unsigned apshift;
>      hwaddr pte_offset;
>      ppc_hash_pte64_t pte;
>      int pp_prot, amr_prot, prot;
> @@ -543,6 +582,18 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu,
> target_ulong eaddr,
>      qemu_log_mask(CPU_LOG_MMU,
>                  "found PTE at offset %08" HWADDR_PRIx "\n",
> pte_offset);
>  
> +    /* Validate page size encoding */
> +    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    if (!apshift) {
> +        error_report("Bad page size encoding in HPTE 0x%"PRIx64" -
> 0x%"PRIx64
> +                     " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1,
> pte_offset);
> +        /* Not entirely sure what the right action is here, but machine
> +         * check seems reasonable */
> +        cs->exception_index = POWERPC_EXCP_MCHECK;
> +        env->error_code = 0;
> +        return 1;
> +    }
> +
>      /* 5. Check access permissions */
>  
>      pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
> @@ -595,10 +646,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU
> *cpu, target_ulong eaddr,
>  
>      /* 7. Determine the real address from the PTE */
>  
> -    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps-
> >page_shift, eaddr);
> +    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, apshift, eaddr);
>  
>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr &
> TARGET_PAGE_MASK,
> -                 prot, mmu_idx, TARGET_PAGE_SIZE);
> +                 prot, mmu_idx, 1ULL << apshift);
>  
>      return 0;
>  }
> @@ -609,6 +660,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU
> *cpu, target_ulong addr)
>      ppc_slb_t *slb;
>      hwaddr pte_offset;
>      ppc_hash_pte64_t pte;
> +    unsigned apshift;
>  
>      if (msr_dr == 0) {
>          /* In real mode the top 4 effective address bits are ignored
> */
> @@ -625,7 +677,12 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU
> *cpu, target_ulong addr)
>          return -1;
>      }
>  
> -    return deposit64(pte.pte1 & HPTE64_R_RPN, 0, slb->sps-
> >page_shift, addr)
> +    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    if (!apshift) {
> +        return -1;
> +    }
> +
> +    return deposit64(pte.pte1 & HPTE64_R_RPN, 0, apshift, addr)
>          & TARGET_PAGE_MASK;
>  }
>  

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
  2016-01-27 18:06   ` Laurent Vivier
@ 2016-01-28  4:20   ` Benjamin Herrenschmidt
  2016-01-28  5:55     ` David Gibson
  2016-01-28 15:45   ` Thomas Huth
  2 siblings, 1 reply; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:20 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> ppc_tlb_invalidate_one() has a big switch handling many different MMU
> types.  However, most of those branches can never be reached:
> 
> It is called from 3 places: from remove_hpte() and h_protect() in
> spapr_hcall.c (which always has a 64-bit hash MMU type), and from
> helper_tlbie() in mmu_helper.c.
> 
> Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
> gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
> set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
> used only on 440 and 460 models with the BookE mmu model.
> 
> This means the exhaustive list of MMU types which may call
> ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
> POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B,
> POWERPC_MMU_2_03,
> POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
> 
> Clean up by removing logic for all other MMU types from
> ppc_tlb_invalidate_one().

I would argue for moving hash64 out of it as well anyway. For one
thing, what we do in there is dumb; the way I change it with lazy
invalidation differs, and tlbie does provide additional information on
server processors that we would need should we choose to implement
fine-grained invalidations (such as the page size).

In the meantime:

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/mmu_helper.c | 20 ++------------------
>  1 file changed, 2 insertions(+), 18 deletions(-)
> 
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index c040b17..82ebe5d 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -1971,25 +1971,10 @@ void ppc_tlb_invalidate_one(CPUPPCState *env,
> target_ulong addr)
>              ppc6xx_tlb_invalidate_virt(env, addr, 1);
>          }
>          break;
> -    case POWERPC_MMU_SOFT_4xx:
> -    case POWERPC_MMU_SOFT_4xx_Z:
> -        ppc4xx_tlb_invalidate_virt(env, addr, env-
> >spr[SPR_40x_PID]);
> -        break;
> -    case POWERPC_MMU_REAL:
> -        cpu_abort(CPU(cpu), "No TLB for PowerPC 4xx in real
> mode\n");
> -        break;
> -    case POWERPC_MMU_MPC8xx:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "MPC8xx MMU model is not
> implemented\n");
> -        break;
>      case POWERPC_MMU_BOOKE:
>          /* XXX: TODO */
>          cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
>          break;
> -    case POWERPC_MMU_BOOKE206:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "BookE 2.06 MMU model is not
> implemented\n");
> -        break;
>      case POWERPC_MMU_32B:
>      case POWERPC_MMU_601:
>          /* tlbie invalidate TLBs for all segments */
> @@ -2031,9 +2016,8 @@ void ppc_tlb_invalidate_one(CPUPPCState *env,
> target_ulong addr)
>          break;
>  #endif /* defined(TARGET_PPC64) */
>      default:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "Unknown MMU model\n");
> -        break;
> +        /* Should never reach here with other MMU models */
> +        assert(0);
>      }
>  #else
>      ppc_tlb_invalidate_all(env);


* Re: [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
  2016-01-27 17:58   ` Laurent Vivier
@ 2016-01-28  4:20   ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:20 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> Currently both the tlbiva instruction (used on 44x chips) and the
> tlbie
> instruction (used on hash MMU chips) are handled via
> ppc_tlb_invalidate_one().  This is silly, because they're invoked
> from
> different places, and do different things.
> 
> Clean this up by separating out the tlbiva instruction into its own
> handling.  In fact the implementation is only a stub anyway.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> ---
>  target-ppc/helper.h     |  1 +
>  target-ppc/mmu_helper.c | 14 ++++++++++----
>  target-ppc/translate.c  |  2 +-
>  3 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/target-ppc/helper.h b/target-ppc/helper.h
> index 869be15..e5a8f7b 100644
> --- a/target-ppc/helper.h
> +++ b/target-ppc/helper.h
> @@ -544,6 +544,7 @@ DEF_HELPER_2(74xx_tlbd, void, env, tl)
>  DEF_HELPER_2(74xx_tlbi, void, env, tl)
>  DEF_HELPER_FLAGS_1(tlbia, TCG_CALL_NO_RWG, void, env)
>  DEF_HELPER_FLAGS_2(tlbie, TCG_CALL_NO_RWG, void, env, tl)
> +DEF_HELPER_FLAGS_2(tlbiva, TCG_CALL_NO_RWG, void, env, tl)
>  #if defined(TARGET_PPC64)
>  DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
>  DEF_HELPER_2(load_slb_esid, tl, env, tl)
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index 82ebe5d..e9e0edb 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -1971,10 +1971,6 @@ void ppc_tlb_invalidate_one(CPUPPCState *env,
> target_ulong addr)
>              ppc6xx_tlb_invalidate_virt(env, addr, 1);
>          }
>          break;
> -    case POWERPC_MMU_BOOKE:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
> -        break;
>      case POWERPC_MMU_32B:
>      case POWERPC_MMU_601:
>          /* tlbie invalidate TLBs for all segments */
> @@ -2116,6 +2112,16 @@ void helper_tlbie(CPUPPCState *env,
> target_ulong addr)
>      ppc_tlb_invalidate_one(env, addr);
>  }
>  
> +void helper_tlbiva(CPUPPCState *env, target_ulong addr)
> +{
> +    PowerPCCPU *cpu = ppc_env_get_cpu(env);
> +
> +    /* tlbiva instruction only exists on BookE */
> +    assert(env->mmu_model == POWERPC_MMU_BOOKE);
> +    /* XXX: TODO */
> +    cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
> +}
> +
>  /* Software driven TLBs management */
>  /* PowerPC 602/603 software TLB load instructions helpers */
>  static void do_6xx_tlb(CPUPPCState *env, target_ulong new_EPN, int
> is_code)
> diff --git a/target-ppc/translate.c b/target-ppc/translate.c
> index 4be7eaa..a05a169 100644
> --- a/target-ppc/translate.c
> +++ b/target-ppc/translate.c
> @@ -5904,7 +5904,7 @@ static void gen_tlbiva(DisasContext *ctx)
>      }
>      t0 = tcg_temp_new();
>      gen_addr_reg_index(ctx, t0);
> -    gen_helper_tlbie(cpu_env, cpu_gpr[rB(ctx->opcode)]);
> +    gen_helper_tlbiva(cpu_env, cpu_gpr[rB(ctx->opcode)]);
>      tcg_temp_free(t0);
>  #endif
>  }


* Re: [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
@ 2016-01-28  4:33   ` Benjamin Herrenschmidt
  2016-01-28  5:57     ` David Gibson
  0 siblings, 1 reply; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:33 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> When HPTEs are removed or modified by hypercalls on spapr, we need to
> invalidate the relevant pages in the qemu TLB.
> 
> Currently we do that by doing some complicated calculations to work out the
> right encoding for the tlbie instruction, then passing that to
> ppc_tlb_invalidate_one()... which totally ignores the argument and flushes
> the whole tlb.
> 
> Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c.

Should we find a better "in between" so that in the long run we
implement tlbie properly?  IE, tlbie will give us the page size using
the same encoding as the HPTE, IIRC, when L=1?  To be honest the
encoding of tlbie in arch 2.07 is so completely insane I have a hard
time figuring it out myself ... :-)

Otherwise,

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/ppc/spapr_hcall.c    | 46 ++++-----------------------------------
> -------
>  target-ppc/mmu-hash64.c | 12 ++++++++++++
>  target-ppc/mmu-hash64.h |  3 +++
>  3 files changed, 19 insertions(+), 42 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 4707196..dedc7e0 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -37,42 +37,6 @@ static void set_spr(CPUState *cs, int spr,
> target_ulong value,
>      run_on_cpu(cs, do_spr_sync, &s);
>  }
>  
> -static target_ulong compute_tlbie_rb(target_ulong v, target_ulong r,
> -                                     target_ulong pte_index)
> -{
> -    target_ulong rb, va_low;
> -
> -    rb = (v & ~0x7fULL) << 16; /* AVA field */
> -    va_low = pte_index >> 3;
> -    if (v & HPTE64_V_SECONDARY) {
> -        va_low = ~va_low;
> -    }
> -    /* xor vsid from AVA */
> -    if (!(v & HPTE64_V_1TB_SEG)) {
> -        va_low ^= v >> 12;
> -    } else {
> -        va_low ^= v >> 24;
> -    }
> -    va_low &= 0x7ff;
> -    if (v & HPTE64_V_LARGE) {
> -        rb |= 1;                         /* L field */
> -#if 0 /* Disable that P7 specific bit for now */
> -        if (r & 0xff000) {
> -            /* non-16MB large page, must be 64k */
> -            /* (masks depend on page size) */
> -            rb |= 0x1000;                /* page encoding in LP
> field */
> -            rb |= (va_low & 0x7f) << 16; /* 7b of VA in AVA/LP field
> */
> -            rb |= (va_low & 0xfe);       /* AVAL field */
> -        }
> -#endif
> -    } else {
> -        /* 4kB page */
> -        rb |= (va_low & 0x7ff) << 12;   /* remaining 11b of AVA */
> -    }
> -    rb |= (v >> 54) & 0x300;            /* B field */
> -    return rb;
> -}
> -
>  static inline bool valid_pte_index(CPUPPCState *env, target_ulong
> pte_index)
>  {
>      /*
> @@ -198,7 +162,7 @@ static RemoveResult remove_hpte(PowerPCCPU *cpu,
> target_ulong ptex,
>  {
>      CPUPPCState *env = &cpu->env;
>      uint64_t token;
> -    target_ulong v, r, rb;
> +    target_ulong v, r;
>  
>      if (!valid_pte_index(env, ptex)) {
>          return REMOVE_PARM;
> @@ -217,8 +181,7 @@ static RemoveResult remove_hpte(PowerPCCPU *cpu,
> target_ulong ptex,
>      *vp = v;
>      *rp = r;
>      ppc_hash64_store_hpte(cpu, ptex, HPTE64_V_HPTE_DIRTY, 0);
> -    rb = compute_tlbie_rb(v, r, ptex);
> -    ppc_tlb_invalidate_one(env, rb);
> +    ppc_hash64_tlb_flush_hpte(cpu, ptex, v, r);
>      return REMOVE_SUCCESS;
>  }
>  
> @@ -322,7 +285,7 @@ static target_ulong h_protect(PowerPCCPU *cpu,
> sPAPRMachineState *spapr,
>      target_ulong pte_index = args[1];
>      target_ulong avpn = args[2];
>      uint64_t token;
> -    target_ulong v, r, rb;
> +    target_ulong v, r;
>  
>      if (!valid_pte_index(env, pte_index)) {
>          return H_PARAMETER;
> @@ -343,10 +306,9 @@ static target_ulong h_protect(PowerPCCPU *cpu,
> sPAPRMachineState *spapr,
>      r |= (flags << 55) & HPTE64_R_PP0;
>      r |= (flags << 48) & HPTE64_R_KEY_HI;
>      r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
> -    rb = compute_tlbie_rb(v, r, pte_index);
>      ppc_hash64_store_hpte(cpu, pte_index,
>                            (v & ~HPTE64_V_VALID) |
> HPTE64_V_HPTE_DIRTY, 0);
> -    ppc_tlb_invalidate_one(env, rb);
> +    ppc_hash64_tlb_flush_hpte(cpu, pte_index, v, r);
>      /* Don't need a memory barrier, due to qemu's global lock */
>      ppc_hash64_store_hpte(cpu, pte_index, v | HPTE64_V_HPTE_DIRTY,
> r);
>      return H_SUCCESS;
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index ee1e8bf..3284776 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -707,3 +707,15 @@ void ppc_hash64_store_hpte(PowerPCCPU *cpu,
>                   env->htab_base + pte_index + HASH_PTE_SIZE_64 / 2,
> pte1);
>      }
>  }
> +
> +void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
> +                               target_ulong pte_index,
> +                               target_ulong pte0, target_ulong pte1)
> +{
> +    /*
> +     * XXX: given the fact that there are too many segments to
> +     * invalidate, and we still don't have a tlb_flush_mask(env, n,
> +     * mask) in QEMU, we just invalidate all TLBs
> +     */
> +    tlb_flush(CPU(cpu), 1);
> +}
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 24fd2c4..293a951 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -13,6 +13,9 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu,
> target_ulong address, int rw,
>                                  int mmu_idx);
>  void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
>                             target_ulong pte0, target_ulong pte1);
> +void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
> +                               target_ulong pte_index,
> +                               target_ulong pte0, target_ulong
> pte1);
>  #endif
>  
>  /*


* Re: [Qemu-devel] [PATCHv2 09/10] target-ppc: Helper to determine page size information from hpte alone
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
@ 2016-01-28  4:33   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:33 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> h_enter() in the spapr code needs to know the page size of the HPTE
> it's
> about to insert.  Unlike other paths that do this, it doesn't have
> access
> to the SLB, so at the moment it determines this with some open-coded
> tests which assume POWER7 or POWER8 page size encodings.
> 
> To make this more flexible add ppc_hash64_hpte_page_shift_noslb() to
> determine both the "base" page size per segment, and the individual
> effective page size from an HPTE alone.
> 
> This means that the spapr code should now be able to handle any page
> size
> listed in the env->sps table.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> ---
>  hw/ppc/spapr_hcall.c    | 25 ++++++-------------------
>  target-ppc/mmu-hash64.c | 35 +++++++++++++++++++++++++++++++++++
>  target-ppc/mmu-hash64.h |  3 +++
>  3 files changed, 44 insertions(+), 19 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index dedc7e0..a535c73 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -72,31 +72,18 @@ static target_ulong h_enter(PowerPCCPU *cpu,
> sPAPRMachineState *spapr,
>      target_ulong pte_index = args[1];
>      target_ulong pteh = args[2];
>      target_ulong ptel = args[3];
> -    target_ulong page_shift = 12;
> +    unsigned apshift, spshift;
>      target_ulong raddr;
>      target_ulong index;
>      uint64_t token;
>  
> -    /* only handle 4k and 16M pages for now */
> -    if (pteh & HPTE64_V_LARGE) {
> -#if 0 /* We don't support 64k pages yet */
> -        if ((ptel & 0xf000) == 0x1000) {
> -            /* 64k page */
> -        } else
> -#endif
> -        if ((ptel & 0xff000) == 0) {
> -            /* 16M page */
> -            page_shift = 24;
> -            /* lowest AVA bit must be 0 for 16M pages */
> -            if (pteh & 0x80) {
> -                return H_PARAMETER;
> -            }
> -        } else {
> -            return H_PARAMETER;
> -        }
> +    apshift = ppc_hash64_hpte_page_shift_noslb(cpu, pteh, ptel,
> &spshift);
> +    if (!apshift) {
> +        /* Bad page size encoding */
> +        return H_PARAMETER;
>      }
>  
> -    raddr = (ptel & HPTE64_R_RPN) & ~((1ULL << page_shift) - 1);
> +    raddr = (ptel & HPTE64_R_RPN) & ~((1ULL << apshift) - 1);
>  
>      if (is_ram_address(spapr, raddr)) {
>          /* Regular RAM - should have WIMG=0010 */
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 3284776..19ee942 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -512,6 +512,41 @@ static unsigned hpte_page_shift(const struct
> ppc_one_seg_page_size *sps,
>      return 0; /* Bad page size encoding */
>  }
>  
> +unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
> +                                          uint64_t pte0, uint64_t
> pte1,
> +                                          unsigned *seg_page_shift)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    int i;
> +
> +    if (!(pte0 & HPTE64_V_LARGE)) {
> +        *seg_page_shift = 12;
> +        return 12;
> +    }
> +
> +    /*
> +     * The encodings in env->sps need to be carefully chosen so that
> +     * this gives an unambiguous result.
> +     */
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const struct ppc_one_seg_page_size *sps = &env->sps.sps[i];
> +        unsigned shift;
> +
> +        if (!sps->page_shift) {
> +            break;
> +        }
> +
> +        shift = hpte_page_shift(sps, pte0, pte1);
> +        if (shift) {
> +            *seg_page_shift = sps->page_shift;
> +            return shift;
> +        }
> +    }
> +
> +    *seg_page_shift = 0;
> +    return 0;
> +}
> +
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>                                  int rwx, int mmu_idx)
>  {
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 293a951..34cf975 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -16,6 +16,9 @@ void ppc_hash64_store_hpte(PowerPCCPU *cpu,
> target_ulong index,
>  void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
>                                 target_ulong pte_index,
>                                 target_ulong pte0, target_ulong
> pte1);
> +unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
> +                                          uint64_t pte0, uint64_t
> pte1,
> +                                          unsigned *seg_page_shift);
>  #endif
>  
>  /*


* Re: [Qemu-devel] [PATCHv2 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
@ 2016-01-28  4:36   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 33+ messages in thread
From: Benjamin Herrenschmidt @ 2016-01-28  4:36 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> Now that the TCG and spapr code has been extended to allow (semi-)
> arbitrary page encodings in the CPU's 'sps' table, we can add the
> many
> page sizes supported by real POWER7 and POWER8 hardware that we
> previously
> didn't support in TCG.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> ---
>  target-ppc/mmu-hash64.h     |  2 ++
>  target-ppc/translate_init.c | 32 ++++++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+)
> 
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 34cf975..ab0f86b 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -48,6 +48,8 @@ unsigned
> ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
>  #define SLB_VSID_LLP_MASK       (SLB_VSID_L | SLB_VSID_LP)
>  #define SLB_VSID_4K             0x0000000000000000ULL
>  #define SLB_VSID_64K            0x0000000000000110ULL
> +#define SLB_VSID_16M            0x0000000000000100ULL
> +#define SLB_VSID_16G            0x0000000000000120ULL
>  
>  /*
>   * Hash page table definitions
> diff --git a/target-ppc/translate_init.c b/target-
> ppc/translate_init.c
> index f6babd2..32b3679 100644
> --- a/target-ppc/translate_init.c
> +++ b/target-ppc/translate_init.c
> @@ -8104,6 +8104,36 @@ static Property powerpc_servercpu_properties[]
> = {
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  
> +#ifdef CONFIG_SOFTMMU
> +static const struct ppc_segment_page_sizes POWER7_POWER8_sps = {
> +    .sps = {
> +        {
> +            .page_shift = 12, /* 4K */
> +            .slb_enc = 0,
> +            .enc = { { .page_shift = 12, .pte_enc = 0 },
> +                     { .page_shift = 16, .pte_enc = 0x7 },
> +                     { .page_shift = 24, .pte_enc = 0x38 }, },
> +        },
> +        {
> +            .page_shift = 16, /* 64K */
> +            .slb_enc = SLB_VSID_64K,
> +            .enc = { { .page_shift = 16, .pte_enc = 0x1 },
> +                     { .page_shift = 24, .pte_enc = 0x8 }, },
> +        },
> +        {
> +            .page_shift = 24, /* 16M */
> +            .slb_enc = SLB_VSID_16M,
> +            .enc = { { .page_shift = 24, .pte_enc = 0 }, },
> +        },
> +        {
> +            .page_shift = 34, /* 16G */
> +            .slb_enc = SLB_VSID_16G,
> +            .enc = { { .page_shift = 34, .pte_enc = 0x3 }, },
> +        },
> +    }
> +};
> +
> +#endif /* CONFIG_SOFTMMU */
> +
>  static void init_proc_POWER7 (CPUPPCState *env)
>  {
>      init_proc_book3s_64(env, BOOK3S_CPU_POWER7);
> @@ -8167,6 +8197,7 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void
> *data)
>      pcc->mmu_model = POWERPC_MMU_2_06;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> +    pcc->sps = &POWER7_POWER8_sps;
>  #endif
>      pcc->excp_model = POWERPC_EXCP_POWER7;
>      pcc->bus_model = PPC_FLAGS_INPUT_POWER7;
> @@ -8247,6 +8278,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void
> *data)
>      pcc->mmu_model = POWERPC_MMU_2_07;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> +    pcc->sps = &POWER7_POWER8_sps;
>  #endif
>      pcc->excp_model = POWERPC_EXCP_POWER7;
>      pcc->bus_model = PPC_FLAGS_INPUT_POWER7;


* Re: [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-28  4:20   ` Benjamin Herrenschmidt
@ 2016-01-28  5:55     ` David Gibson
  0 siblings, 0 replies; 33+ messages in thread
From: David Gibson @ 2016-01-28  5:55 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Thu, Jan 28, 2016 at 03:20:38PM +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> > ppc_tlb_invalidate_one() has a big switch handling many different MMU
> > types.  However, most of those branches can never be reached:
> > 
> > It is called from 3 places: from remove_hpte() and h_protect() in
> > spapr_hcall.c (which always has a 64-bit hash MMU type), and from
> > helper_tlbie() in mmu_helper.c.
> > 
> > Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
> > gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
> > set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
> > used only on 440 and 460 models with the BookE mmu model.
> > 
> > This means the exhaustive list of MMU types which may call
> > ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
> > POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B,
> > POWERPC_MMU_2_03,
> > POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
> > 
> > Clean up by removing logic for all other MMU types from
> > ppc_tlb_invalidate_one().
> 
> I would argue for moving hash64 out of it as well anyway. For one
> thing, what we do in there is dumb; the way I change it with lazy
> invalidation differs, and tlbie does provide additional information on
> server processors that we would need should we choose to implement
> fine-grained invalidations (such as the page size).

I agree, but I didn't want to postpone the current things I'm working
on while I did that.
 
> In the meantime:
> 
> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
  2016-01-28  4:33   ` Benjamin Herrenschmidt
@ 2016-01-28  5:57     ` David Gibson
  0 siblings, 0 replies; 33+ messages in thread
From: David Gibson @ 2016-01-28  5:57 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

On Thu, Jan 28, 2016 at 03:33:18PM +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2016-01-27 at 21:13 +1100, David Gibson wrote:
> > When HPTEs are removed or modified by hypercalls on spapr, we need to
> > invalidate the relevant pages in the qemu TLB.
> > 
> > Currently we do that by doing some complicated calculations to work out the
> > right encoding for the tlbie instruction, then passing that to
> > ppc_tlb_invalidate_one()... which totally ignores the argument and flushes
> > the whole tlb.
> > 
> > Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c.
> 
> > Should we find a better "in between" so that in the long run we
> > implement tlbie properly?  IE, tlbie will give us the page size using
> > the same encoding as the HPTE, IIRC, when L=1?  To be honest the
> > encoding of tlbie in arch 2.07 is so completely insane I have a hard
> > time figuring it out myself ... :-)

I'm not entirely sure what the better in-between would be.  Having the
> page size in tlbie isn't enough on its own - the bigger problem is that
we need a way of invalidating a whole congruence class of entries in
the qemu TLB, which it doesn't currently provide a means to do.
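
[Editorial note: the following is a toy illustration, not QEMU's actual
softmmu API. It sketches what the "flush by mask" primitive David refers
to might look like: invalidate every cached translation whose address
matches a target under a mask, so one call clears a whole congruence
class. All names (toy_tlb_*) are invented.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_TLB_SIZE 8

struct toy_tlb_entry { uint64_t vaddr; bool valid; };

/* Invalidate every valid entry whose address matches 'addr' under
 * 'mask'; return how many entries were flushed. */
size_t toy_tlb_flush_mask(struct toy_tlb_entry *tlb, size_t n,
                          uint64_t addr, uint64_t mask)
{
    size_t flushed = 0;

    for (size_t i = 0; i < n; i++) {
        if (tlb[i].valid && (tlb[i].vaddr & mask) == (addr & mask)) {
            tlb[i].valid = false;
            flushed++;
        }
    }
    return flushed;
}
```

For example, flushing address 0x10000 with mask ~0xffff would drop
entries at 0x10000 and 0x11000 (same 64K region) while leaving an entry
at 0x20000 alone — the kind of selective invalidation tlbie's page-size
information would enable if QEMU's TLB exposed such a primitive.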

> Otherwise,
> 
> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
  2016-01-27 18:06   ` Laurent Vivier
  2016-01-28  4:20   ` Benjamin Herrenschmidt
@ 2016-01-28 15:45   ` Thomas Huth
  2016-01-29  2:31     ` David Gibson
  2 siblings, 1 reply; 33+ messages in thread
From: Thomas Huth @ 2016-01-28 15:45 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, qemu-ppc, agraf, qemu-devel

On 27.01.2016 11:13, David Gibson wrote:
> ppc_tlb_invalidate_one() has a big switch handling many different MMU
> types.  However, most of those branches can never be reached:
> 
> It is called from 3 places: from remove_hpte() and h_protect() in
> spapr_hcall.c (which always has a 64-bit hash MMU type), and from
> helper_tlbie() in mmu_helper.c.
> 
> Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
> gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
> set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
> used only on 440 and 460 models with the BookE mmu model.
> 
> This means the exhaustive list of MMU types which may call
> ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
> POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
> POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
> 
> Clean up by removing logic for all other MMU types from
> ppc_tlb_invalidate_one().
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
...
> @@ -2031,9 +2016,8 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
>          break;
>  #endif /* defined(TARGET_PPC64) */
>      default:
> -        /* XXX: TODO */
> -        cpu_abort(CPU(cpu), "Unknown MMU model\n");
> -        break;
> +        /* Should never reach here with other MMU models */
> +        assert(0);
>      }

May I suggest simply using "abort()" instead of "assert(0)"?

 Thomas
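
[Editorial note: a small sketch, not QEMU code, of why Thomas's
suggestion matters: assert() compiles to a no-op when NDEBUG is defined
(as in many release builds), so an "unreachable" default case guarded
only by assert(0) would silently fall through there; abort() — or glib's
g_assert_not_reached(), which QEMU also has available — terminates
unconditionally. The classify_mmu() function is invented for
illustration.]

```c
#include <assert.h>
#include <stdlib.h>

int classify_mmu(int model)
{
    switch (model) {
    case 1:
        return 100;
    default:
        /* assert(0) vanishes when built with -DNDEBUG, so execution
         * would continue past it in such builds ... */
        assert(0);
        /* ... whereas abort() fires regardless of build flags,
         * guaranteeing we never return garbage from here. */
        abort();
    }
}
```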


* Re: [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
  2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (9 preceding siblings ...)
  2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
@ 2016-01-28 20:44 ` Alexander Graf
  2016-01-29  2:36   ` David Gibson
  10 siblings, 1 reply; 33+ messages in thread
From: Alexander Graf @ 2016-01-28 20:44 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, thuth, qemu-ppc, qemu-devel



On 01/27/2016 11:13 AM, David Gibson wrote:
> Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
> involving control bits in both the SLB and HPTE.  At present we
> support a few of the options, but far fewer than real hardware.
>
> We're able to get away with that in practice, because guests use a
> device tree property to determine which page sizes are available and
> we are setting that to match.  However, the fact that the actual code
> doesn't necessarily match what we put into the table of available page
> sizes is another ugliness.
>
> This series makes a number of cleanups to the page size handling.  The
> upshot is that afterwards the softmmu code operates off the same page
> size encoding table that is advertised to the guests, ensuring that
> they will be in sync.
>
> Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
> include the options allowed in hardware (including MPSS).  We can fix
> other hash MMU based CPUs in future if anyone cares enough.
>
> For a simple benchmark I timed fully booting then cleanly shutting
> down a TCG guest (RHEL7.2 userspace with a recent upstream kernel).
> Repeated 5 times on the current master branch, my current ppc-for-2.6
> branch and this branch.  It looks like it improves speed, although the
> difference is pretty much negligible:
>
> master:		2m25 2m28 2m26 2m26 2m26
> ppc-for-2.6:    2m26 2m25 2m26 2m27 2m25
> this series:    2m20 2m23 2m23 2m25 2m21
>
> Please review, and I'll fold into ppc-for-2.6 for my next pull.
>
> Changes since v1:
>    * Fix a couple of simple but serious bugs in logic
>    * Did some rudimentary benchmarking
> Changes since RFC:
>    * Moved lookup of SLB encodings table from SLB lookup time to SLB
>        store time

LGTM, apart from the comments that people already made. Please also 
provide changelogs in the individual patch files next time - it makes it 
easier for people who just try to see what changed from one version to 
another ;).

Reviewed-by: Alexander Graf <agraf@suse.de>

Also, please just double sanity check that the code after your 
conversion still works well on 32bit hosts ;). I suppose you have a 
32bit build environment by now, so that should be quite easy to pull off.


Alex

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-28 15:45   ` Thomas Huth
@ 2016-01-29  2:31     ` David Gibson
  0 siblings, 0 replies; 33+ messages in thread
From: David Gibson @ 2016-01-29  2:31 UTC (permalink / raw)
  To: Thomas Huth; +Cc: lvivier, aik, agraf, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 2019 bytes --]

On Thu, Jan 28, 2016 at 04:45:21PM +0100, Thomas Huth wrote:
> On 27.01.2016 11:13, David Gibson wrote:
> > ppc_tlb_invalidate_one() has a big switch handling many different MMU
> > types.  However, most of those branches can never be reached:
> > 
> > It is called from 3 places: from remove_hpte() and h_protect() in
> > spapr_hcall.c (which always has a 64-bit hash MMU type), and from
> > helper_tlbie() in mmu_helper.c.
> > 
> > Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
> > gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
> > set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
> > used only on 440 and 460 models with the BookE MMU model.
> > 
> > This means the exhaustive list of MMU types which may call
> > ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
> > POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
> > POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
> > 
> > Clean up by removing logic for all other MMU types from
> > ppc_tlb_invalidate_one().
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> ...
> > @@ -2031,9 +2016,8 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
> >          break;
> >  #endif /* defined(TARGET_PPC64) */
> >      default:
> > -        /* XXX: TODO */
> > -        cpu_abort(CPU(cpu), "Unknown MMU model\n");
> > -        break;
> > +        /* Should never reach here with other MMU models */
> > +        assert(0);
> >      }
> 
> May I suggest to simply use "abort()" instead of "assert(0)" ?

I actually prefer assert(0), because it documents that this is really
a "can't happen" rather than just "we can't cope".  It also means it
can be elided with -DNDEBUG.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
  2016-01-28 20:44 ` [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG Alexander Graf
@ 2016-01-29  2:36   ` David Gibson
  0 siblings, 0 replies; 33+ messages in thread
From: David Gibson @ 2016-01-29  2:36 UTC (permalink / raw)
  To: Alexander Graf; +Cc: lvivier, thuth, aik, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 2800 bytes --]

On Thu, Jan 28, 2016 at 09:44:53PM +0100, Alexander Graf wrote:
> 
> 
> On 01/27/2016 11:13 AM, David Gibson wrote:
> >Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
> >involving control bits in both the SLB and HPTE.  At present we
> >support a few of the options, but far fewer than real hardware.
> >
> >We're able to get away with that in practice, because guests use a
> >device tree property to determine which page sizes are available and
> >we are setting that to match.  However, the fact that the actual code
> >doesn't necessarily match what we put into the table of available page
> >sizes is another ugliness.
> >
> >This series makes a number of cleanups to the page size handling.  The
> >upshot is that afterwards the softmmu code operates off the same page
> >size encoding table that is advertised to the guests, ensuring that
> >they will be in sync.
> >
> >Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
> >include the options allowed in hardware (including MPSS).  We can fix
> >other hash MMU based CPUs in future if anyone cares enough.
> >
> >For a simple benchmark I timed fully booting then cleanly shutting
> >down a TCG guest (RHEL7.2 userspace with a recent upstream kernel).
> >Repeated 5 times on the current master branch, my current ppc-for-2.6
> >branch and this branch.  It looks like it improves speed, although the
> >difference is pretty much negligible:
> >
> >master:		2m25 2m28 2m26 2m26 2m26
> >ppc-for-2.6:    2m26 2m25 2m26 2m27 2m25
> >this series:    2m20 2m23 2m23 2m25 2m21
> >
> >Please review, and I'll fold into ppc-for-2.6 for my next pull.
> >
> >Changes since v1:
> >   * Fix a couple of simple but serious bugs in logic
> >   * Did some rudimentary benchmarking
> >Changes since RFC:
> >   * Moved lookup of SLB encodings table from SLB lookup time to SLB
> >       store time
> 
> LGTM, apart from the comments that people already made. Please also provide
> changelogs in the individual patch files next time - it makes it easier for
> people who just try to see what changed from one version to another ;).
> 
> Reviewed-by: Alexander Graf <agraf@suse.de>

Thanks, I've merged to ppc-for-2.6.

> Also, please just double sanity check that the code after your conversion
> still works well on 32bit hosts ;). I suppose you have a 32bit build
> environment by now, so that should be quite easy to pull off.

Yeah, will do.  I'm still pretty pissed that glib breaks the multiarch
build, which should be straightforward, but I have something workable.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2016-01-29  4:06 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-27 10:13 [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
2016-01-27 12:16   ` Thomas Huth
2016-01-27 12:55   ` Laurent Vivier
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
2016-01-27 14:11   ` Laurent Vivier
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 03/10] target-ppc: Rework ppc_store_slb David Gibson
2016-01-27 17:21   ` Laurent Vivier
2016-01-28  4:16   ` Benjamin Herrenschmidt
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 04/10] target-ppc: Rework SLB page size lookup David Gibson
2016-01-28  4:17   ` Benjamin Herrenschmidt
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
2016-01-28  4:18   ` Benjamin Herrenschmidt
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
2016-01-27 18:06   ` Laurent Vivier
2016-01-27 23:47     ` David Gibson
2016-01-28  4:20   ` Benjamin Herrenschmidt
2016-01-28  5:55     ` David Gibson
2016-01-28 15:45   ` Thomas Huth
2016-01-29  2:31     ` David Gibson
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
2016-01-27 17:58   ` Laurent Vivier
2016-01-27 23:31     ` David Gibson
2016-01-28  4:20   ` Benjamin Herrenschmidt
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
2016-01-28  4:33   ` Benjamin Herrenschmidt
2016-01-28  5:57     ` David Gibson
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
2016-01-28  4:33   ` Benjamin Herrenschmidt
2016-01-27 10:13 ` [Qemu-devel] [PATCHv2 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
2016-01-28  4:36   ` Benjamin Herrenschmidt
2016-01-28 20:44 ` [Qemu-devel] [PATCHv2 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG Alexander Graf
2016-01-29  2:36   ` David Gibson

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.