* [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
@ 2016-01-25  5:15 David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
                   ` (10 more replies)
  0 siblings, 11 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
involving control bits in both the SLB and HPTE.  At present we
support a few of the options, but far fewer than real hardware.

We're able to get away with that in practice, because guests use a
device tree property to determine which page sizes are available, and
we set that property to match what we support.  However, the fact that
the actual code doesn't necessarily match what we put into the table
of available page sizes is another ugliness.

This series makes a number of cleanups to the page size handling.  The
upshot is that afterwards the softmmu code operates from the same page
size encoding table that is advertised to the guests, ensuring the two
stay in sync.
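
For context, the page size encoding table in question is the 'sps' field
of CPUPPCState (a struct ppc_segment_page_sizes), the same information we
advertise to guests.  A sketch of the structures as they appear in
target-ppc/cpu.h, with the field comments mine:

    struct ppc_one_page_size {
        uint32_t page_shift;    /* actual page size shift; 0 ends the list */
        uint32_t pte_enc;       /* corresponding encoding in the HPTE */
    };

    struct ppc_one_seg_page_size {
        uint32_t page_shift;    /* base page size shift for the segment */
        uint32_t slb_enc;       /* SLB encoding selecting this base size */
        struct ppc_one_page_size enc[PPC_PAGE_SIZES_MAX_SZ];
    };

    struct ppc_segment_page_sizes {
        struct ppc_one_seg_page_size sps[PPC_PAGE_SIZES_MAX_SZ];
    };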

Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
include the options allowed in hardware (including MPSS).  We can fix
other hash MMU based CPUs in future if anyone cares enough.

Please review, and I'll fold this into ppc-for-2.6 for my next pull
request.

Changes since RFC:
  * Moved lookup of SLB encodings table from SLB lookup time to SLB
    store time

David Gibson (10):
  target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
  target-ppc: Convert mmu-hash{32,64}.[ch] from CPUPPCState to
    PowerPCCPU
  target-ppc: Rework ppc_store_slb
  target-ppc: Rework SLB page size lookup
  target-ppc: Use actual page size encodings from HPTE
  target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
  target-ppc: Helper to determine page size information from hpte alone
  target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG

 hw/ppc/spapr_hcall.c        | 102 ++++------------
 target-ppc/cpu.h            |   1 +
 target-ppc/helper.h         |   1 +
 target-ppc/kvm.c            |   2 +-
 target-ppc/kvm_ppc.h        |   5 -
 target-ppc/machine.c        |  20 ++++
 target-ppc/mmu-hash32.c     |  68 ++++++-----
 target-ppc/mmu-hash32.h     |  30 ++---
 target-ppc/mmu-hash64.c     | 286 ++++++++++++++++++++++++++++++++------------
 target-ppc/mmu-hash64.h     |  30 +++--
 target-ppc/mmu_helper.c     |  59 ++++-----
 target-ppc/translate.c      |   2 +-
 target-ppc/translate_init.c |  32 +++++
 13 files changed, 389 insertions(+), 249 deletions(-)

-- 
2.5.0

* [Qemu-devel] [PATCH 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

This stub function is in the !KVM ifdef in target-ppc/kvm_ppc.h.  However,
no such function exists on the KVM side, nor is it ever used.

I think this originally referenced a function which read host page size
information from /proc, for which we now use the KVM GET_SMMU_INFO
extension instead.

In any case, it has no function now, so remove it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/kvm_ppc.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
index 5e1333d..62406ce 100644
--- a/target-ppc/kvm_ppc.h
+++ b/target-ppc/kvm_ppc.h
@@ -98,11 +98,6 @@ static inline int kvmppc_get_hypercall(CPUPPCState *env, uint8_t *buf, int buf_l
     return -1;
 }
 
-static inline int kvmppc_read_segment_page_sizes(uint32_t *prop, int maxcells)
-{
-    return -1;
-}
-
 static inline int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level)
 {
     return -1;
-- 
2.5.0

* [Qemu-devel] [PATCH 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb David Gibson
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Like a lot of places, these files include a mixture of functions taking
the older CPUPPCState *env and the newer PowerPCCPU *cpu.  Move a step
closer to cleaning this up by standardizing on PowerPCCPU, except for the
helper_* functions, which are called with the CPUPPCState * from TCG.

Callers and some related functions are updated as well; the boundaries of
what's changed here are somewhat arbitrary.
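
For reference, the conversion idioms used throughout the hunks below
(nothing new is introduced; these are just the two directions of the
mapping, plus the generic CPUState view):

    PowerPCCPU *cpu = ppc_env_get_cpu(env);  /* env -> cpu, at the
                                              * helper_* boundaries */
    CPUPPCState *env = &cpu->env;            /* cpu -> env, inside the
                                              * converted functions */
    CPUState *cs = CPU(cpu);                 /* cpu -> generic CPUState,
                                              * e.g. for cs->as accesses */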

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_hcall.c    | 31 ++++++++++---------
 target-ppc/kvm.c        |  2 +-
 target-ppc/mmu-hash32.c | 68 +++++++++++++++++++++++------------------
 target-ppc/mmu-hash32.h | 30 ++++++++++---------
 target-ppc/mmu-hash64.c | 80 +++++++++++++++++++++++++++++--------------------
 target-ppc/mmu-hash64.h | 21 ++++++-------
 target-ppc/mmu_helper.c | 13 ++++----
 7 files changed, 136 insertions(+), 109 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index c4ae255..4707196 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -160,7 +160,7 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
         pte_index &= ~7ULL;
         token = ppc_hash64_start_access(cpu, pte_index);
         for (; index < 8; index++) {
-            if ((ppc_hash64_load_hpte0(env, token, index) & HPTE64_V_VALID) == 0) {
+            if (!(ppc_hash64_load_hpte0(cpu, token, index) & HPTE64_V_VALID)) {
                 break;
             }
         }
@@ -170,14 +170,14 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
         }
     } else {
         token = ppc_hash64_start_access(cpu, pte_index);
-        if (ppc_hash64_load_hpte0(env, token, 0) & HPTE64_V_VALID) {
+        if (ppc_hash64_load_hpte0(cpu, token, 0) & HPTE64_V_VALID) {
             ppc_hash64_stop_access(token);
             return H_PTEG_FULL;
         }
         ppc_hash64_stop_access(token);
     }
 
-    ppc_hash64_store_hpte(env, pte_index + index,
+    ppc_hash64_store_hpte(cpu, pte_index + index,
                           pteh | HPTE64_V_HPTE_DIRTY, ptel);
 
     args[0] = pte_index + index;
@@ -191,11 +191,12 @@ typedef enum {
     REMOVE_HW = 3,
 } RemoveResult;
 
-static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
+static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
                                 target_ulong avpn,
                                 target_ulong flags,
                                 target_ulong *vp, target_ulong *rp)
 {
+    CPUPPCState *env = &cpu->env;
     uint64_t token;
     target_ulong v, r, rb;
 
@@ -203,9 +204,9 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
         return REMOVE_PARM;
     }
 
-    token = ppc_hash64_start_access(ppc_env_get_cpu(env), ptex);
-    v = ppc_hash64_load_hpte0(env, token, 0);
-    r = ppc_hash64_load_hpte1(env, token, 0);
+    token = ppc_hash64_start_access(cpu, ptex);
+    v = ppc_hash64_load_hpte0(cpu, token, 0);
+    r = ppc_hash64_load_hpte1(cpu, token, 0);
     ppc_hash64_stop_access(token);
 
     if ((v & HPTE64_V_VALID) == 0 ||
@@ -215,7 +216,7 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
     }
     *vp = v;
     *rp = r;
-    ppc_hash64_store_hpte(env, ptex, HPTE64_V_HPTE_DIRTY, 0);
+    ppc_hash64_store_hpte(cpu, ptex, HPTE64_V_HPTE_DIRTY, 0);
     rb = compute_tlbie_rb(v, r, ptex);
     ppc_tlb_invalidate_one(env, rb);
     return REMOVE_SUCCESS;
@@ -224,13 +225,12 @@ static RemoveResult remove_hpte(CPUPPCState *env, target_ulong ptex,
 static target_ulong h_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
                              target_ulong opcode, target_ulong *args)
 {
-    CPUPPCState *env = &cpu->env;
     target_ulong flags = args[0];
     target_ulong pte_index = args[1];
     target_ulong avpn = args[2];
     RemoveResult ret;
 
-    ret = remove_hpte(env, pte_index, avpn, flags,
+    ret = remove_hpte(cpu, pte_index, avpn, flags,
                       &args[0], &args[1]);
 
     switch (ret) {
@@ -271,7 +271,6 @@ static target_ulong h_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
 static target_ulong h_bulk_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
                                   target_ulong opcode, target_ulong *args)
 {
-    CPUPPCState *env = &cpu->env;
     int i;
 
     for (i = 0; i < H_BULK_REMOVE_MAX_BATCH; i++) {
@@ -293,7 +292,7 @@ static target_ulong h_bulk_remove(PowerPCCPU *cpu, sPAPRMachineState *spapr,
             return H_PARAMETER;
         }
 
-        ret = remove_hpte(env, *tsh & H_BULK_REMOVE_PTEX, tsl,
+        ret = remove_hpte(cpu, *tsh & H_BULK_REMOVE_PTEX, tsl,
                           (*tsh & H_BULK_REMOVE_FLAGS) >> 26,
                           &v, &r);
 
@@ -330,8 +329,8 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     }
 
     token = ppc_hash64_start_access(cpu, pte_index);
-    v = ppc_hash64_load_hpte0(env, token, 0);
-    r = ppc_hash64_load_hpte1(env, token, 0);
+    v = ppc_hash64_load_hpte0(cpu, token, 0);
+    r = ppc_hash64_load_hpte1(cpu, token, 0);
     ppc_hash64_stop_access(token);
 
     if ((v & HPTE64_V_VALID) == 0 ||
@@ -345,11 +344,11 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     r |= (flags << 48) & HPTE64_R_KEY_HI;
     r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
     rb = compute_tlbie_rb(v, r, pte_index);
-    ppc_hash64_store_hpte(env, pte_index,
+    ppc_hash64_store_hpte(cpu, pte_index,
                           (v & ~HPTE64_V_VALID) | HPTE64_V_HPTE_DIRTY, 0);
     ppc_tlb_invalidate_one(env, rb);
     /* Don't need a memory barrier, due to qemu's global lock */
-    ppc_hash64_store_hpte(env, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
+    ppc_hash64_store_hpte(cpu, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
     return H_SUCCESS;
 }
 
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 4524999..98d7ba6 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
              * Only restore valid entries
              */
             if (rb & SLB_ESID_V) {
-                ppc_store_slb(env, rb, rs);
+                ppc_store_slb(cpu, rb, rs);
             }
         }
 #endif
diff --git a/target-ppc/mmu-hash32.c b/target-ppc/mmu-hash32.c
index a00ae3c..4497480 100644
--- a/target-ppc/mmu-hash32.c
+++ b/target-ppc/mmu-hash32.c
@@ -83,9 +83,10 @@ static int ppc_hash32_pp_prot(int key, int pp, int nx)
     return prot;
 }
 
-static int ppc_hash32_pte_prot(CPUPPCState *env,
+static int ppc_hash32_pte_prot(PowerPCCPU *cpu,
                                target_ulong sr, ppc_hash_pte32_t pte)
 {
+    CPUPPCState *env = &cpu->env;
     unsigned pp, key;
 
     key = !!(msr_pr ? (sr & SR32_KP) : (sr & SR32_KS));
@@ -94,9 +95,11 @@ static int ppc_hash32_pte_prot(CPUPPCState *env,
     return ppc_hash32_pp_prot(key, pp, !!(sr & SR32_NX));
 }
 
-static target_ulong hash32_bat_size(CPUPPCState *env,
+static target_ulong hash32_bat_size(PowerPCCPU *cpu,
                                     target_ulong batu, target_ulong batl)
 {
+    CPUPPCState *env = &cpu->env;
+
     if ((msr_pr && !(batu & BATU32_VP))
         || (!msr_pr && !(batu & BATU32_VS))) {
         return 0;
@@ -105,7 +108,7 @@ static target_ulong hash32_bat_size(CPUPPCState *env,
     return BATU32_BEPI & ~((batu & BATU32_BL) << 15);
 }
 
-static int hash32_bat_prot(CPUPPCState *env,
+static int hash32_bat_prot(PowerPCCPU *cpu,
                            target_ulong batu, target_ulong batl)
 {
     int pp, prot;
@@ -121,7 +124,7 @@ static int hash32_bat_prot(CPUPPCState *env,
     return prot;
 }
 
-static target_ulong hash32_bat_601_size(CPUPPCState *env,
+static target_ulong hash32_bat_601_size(PowerPCCPU *cpu,
                                 target_ulong batu, target_ulong batl)
 {
     if (!(batl & BATL32_601_V)) {
@@ -131,9 +134,10 @@ static target_ulong hash32_bat_601_size(CPUPPCState *env,
     return BATU32_BEPI & ~((batl & BATL32_601_BL) << 17);
 }
 
-static int hash32_bat_601_prot(CPUPPCState *env,
+static int hash32_bat_601_prot(PowerPCCPU *cpu,
                                target_ulong batu, target_ulong batl)
 {
+    CPUPPCState *env = &cpu->env;
     int key, pp;
 
     pp = batu & BATU32_601_PP;
@@ -145,9 +149,10 @@ static int hash32_bat_601_prot(CPUPPCState *env,
     return ppc_hash32_pp_prot(key, pp, 0);
 }
 
-static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
+static hwaddr ppc_hash32_bat_lookup(PowerPCCPU *cpu, target_ulong ea, int rwx,
                                     int *prot)
 {
+    CPUPPCState *env = &cpu->env;
     target_ulong *BATlt, *BATut;
     int i;
 
@@ -166,9 +171,9 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
         target_ulong mask;
 
         if (unlikely(env->mmu_model == POWERPC_MMU_601)) {
-            mask = hash32_bat_601_size(env, batu, batl);
+            mask = hash32_bat_601_size(cpu, batu, batl);
         } else {
-            mask = hash32_bat_size(env, batu, batl);
+            mask = hash32_bat_size(cpu, batu, batl);
         }
         LOG_BATS("%s: %cBAT%d v " TARGET_FMT_lx " BATu " TARGET_FMT_lx
                  " BATl " TARGET_FMT_lx "\n", __func__,
@@ -178,9 +183,9 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
             hwaddr raddr = (batl & mask) | (ea & ~mask);
 
             if (unlikely(env->mmu_model == POWERPC_MMU_601)) {
-                *prot = hash32_bat_601_prot(env, batu, batl);
+                *prot = hash32_bat_601_prot(cpu, batu, batl);
             } else {
-                *prot = hash32_bat_prot(env, batu, batl);
+                *prot = hash32_bat_prot(cpu, batu, batl);
             }
 
             return raddr & TARGET_PAGE_MASK;
@@ -209,11 +214,12 @@ static hwaddr ppc_hash32_bat_lookup(CPUPPCState *env, target_ulong ea, int rwx,
     return -1;
 }
 
-static int ppc_hash32_direct_store(CPUPPCState *env, target_ulong sr,
+static int ppc_hash32_direct_store(PowerPCCPU *cpu, target_ulong sr,
                                    target_ulong eaddr, int rwx,
                                    hwaddr *raddr, int *prot)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
     int key = !!(msr_pr ? (sr & SR32_KP) : (sr & SR32_KS));
 
     qemu_log_mask(CPU_LOG_MMU, "direct store...\n");
@@ -293,12 +299,14 @@ static int ppc_hash32_direct_store(CPUPPCState *env, target_ulong sr,
     }
 }
 
-hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash)
+hwaddr get_pteg_offset32(PowerPCCPU *cpu, hwaddr hash)
 {
+    CPUPPCState *env = &cpu->env;
+
     return (hash * HASH_PTEG_SIZE_32) & env->htab_mask;
 }
 
-static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
+static hwaddr ppc_hash32_pteg_search(PowerPCCPU *cpu, hwaddr pteg_off,
                                      bool secondary, target_ulong ptem,
                                      ppc_hash_pte32_t *pte)
 {
@@ -307,8 +315,8 @@ static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
     int i;
 
     for (i = 0; i < HPTES_PER_GROUP; i++) {
-        pte0 = ppc_hash32_load_hpte0(env, pte_offset);
-        pte1 = ppc_hash32_load_hpte1(env, pte_offset);
+        pte0 = ppc_hash32_load_hpte0(cpu, pte_offset);
+        pte1 = ppc_hash32_load_hpte1(cpu, pte_offset);
 
         if ((pte0 & HPTE32_V_VALID)
             && (secondary == !!(pte0 & HPTE32_V_SECONDARY))
@@ -324,10 +332,11 @@ static hwaddr ppc_hash32_pteg_search(CPUPPCState *env, hwaddr pteg_off,
     return -1;
 }
 
-static hwaddr ppc_hash32_htab_lookup(CPUPPCState *env,
+static hwaddr ppc_hash32_htab_lookup(PowerPCCPU *cpu,
                                      target_ulong sr, target_ulong eaddr,
                                      ppc_hash_pte32_t *pte)
 {
+    CPUPPCState *env = &cpu->env;
     hwaddr pteg_off, pte_offset;
     hwaddr hash;
     uint32_t vsid, pgidx, ptem;
@@ -348,16 +357,16 @@ static hwaddr ppc_hash32_htab_lookup(CPUPPCState *env,
             " vsid=%" PRIx32 " ptem=%" PRIx32
             " hash=" TARGET_FMT_plx "\n",
             env->htab_base, env->htab_mask, vsid, ptem, hash);
-    pteg_off = get_pteg_offset32(env, hash);
-    pte_offset = ppc_hash32_pteg_search(env, pteg_off, 0, ptem, pte);
+    pteg_off = get_pteg_offset32(cpu, hash);
+    pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 0, ptem, pte);
     if (pte_offset == -1) {
         /* Secondary PTEG lookup */
         qemu_log_mask(CPU_LOG_MMU, "1 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
                 " vsid=%" PRIx32 " api=%" PRIx32
                 " hash=" TARGET_FMT_plx "\n", env->htab_base,
                 env->htab_mask, vsid, ptem, ~hash);
-        pteg_off = get_pteg_offset32(env, ~hash);
-        pte_offset = ppc_hash32_pteg_search(env, pteg_off, 1, ptem, pte);
+        pteg_off = get_pteg_offset32(cpu, ~hash);
+        pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 1, ptem, pte);
     }
 
     return pte_offset;
@@ -399,7 +408,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
 
     /* 2. Check Block Address Translation entries (BATs) */
     if (env->nb_BATs != 0) {
-        raddr = ppc_hash32_bat_lookup(env, eaddr, rwx, &prot);
+        raddr = ppc_hash32_bat_lookup(cpu, eaddr, rwx, &prot);
         if (raddr != -1) {
             if (need_prot[rwx] & ~prot) {
                 if (rwx == 2) {
@@ -430,7 +439,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
 
     /* 4. Handle direct store segments */
     if (sr & SR32_T) {
-        if (ppc_hash32_direct_store(env, sr, eaddr, rwx,
+        if (ppc_hash32_direct_store(cpu, sr, eaddr, rwx,
                                     &raddr, &prot) == 0) {
             tlb_set_page(cs, eaddr & TARGET_PAGE_MASK,
                          raddr & TARGET_PAGE_MASK, prot, mmu_idx,
@@ -449,7 +458,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
     }
 
     /* 6. Locate the PTE in the hash table */
-    pte_offset = ppc_hash32_htab_lookup(env, sr, eaddr, &pte);
+    pte_offset = ppc_hash32_htab_lookup(cpu, sr, eaddr, &pte);
     if (pte_offset == -1) {
         if (rwx == 2) {
             cs->exception_index = POWERPC_EXCP_ISI;
@@ -472,7 +481,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
 
     /* 7. Check access permissions */
 
-    prot = ppc_hash32_pte_prot(env, sr, pte);
+    prot = ppc_hash32_pte_prot(cpu, sr, pte);
 
     if (need_prot[rwx] & ~prot) {
         /* Access right violation */
@@ -507,7 +516,7 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
     }
 
     if (new_pte1 != pte.pte1) {
-        ppc_hash32_store_hpte1(env, pte_offset, new_pte1);
+        ppc_hash32_store_hpte1(cpu, pte_offset, new_pte1);
     }
 
     /* 9. Determine the real address from the PTE */
@@ -520,8 +529,9 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr, int rwx,
     return 0;
 }
 
-hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
+hwaddr ppc_hash32_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
 {
+    CPUPPCState *env = &cpu->env;
     target_ulong sr;
     hwaddr pte_offset;
     ppc_hash_pte32_t pte;
@@ -533,7 +543,7 @@ hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
     }
 
     if (env->nb_BATs != 0) {
-        hwaddr raddr = ppc_hash32_bat_lookup(env, eaddr, 0, &prot);
+        hwaddr raddr = ppc_hash32_bat_lookup(cpu, eaddr, 0, &prot);
         if (raddr != -1) {
             return raddr;
         }
@@ -546,7 +556,7 @@ hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong eaddr)
         return -1;
     }
 
-    pte_offset = ppc_hash32_htab_lookup(env, sr, eaddr, &pte);
+    pte_offset = ppc_hash32_htab_lookup(cpu, sr, eaddr, &pte);
     if (pte_offset == -1) {
         return -1;
     }
diff --git a/target-ppc/mmu-hash32.h b/target-ppc/mmu-hash32.h
index d515d4f..afbb9dd 100644
--- a/target-ppc/mmu-hash32.h
+++ b/target-ppc/mmu-hash32.h
@@ -3,8 +3,8 @@
 
 #ifndef CONFIG_USER_ONLY
 
-hwaddr get_pteg_offset32(CPUPPCState *env, hwaddr hash);
-hwaddr ppc_hash32_get_phys_page_debug(CPUPPCState *env, target_ulong addr);
+hwaddr get_pteg_offset32(PowerPCCPU *cpu, hwaddr hash);
+hwaddr ppc_hash32_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
 int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
 
@@ -65,40 +65,42 @@ int ppc_hash32_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
 #define HPTE32_R_WIMG           0x00000078
 #define HPTE32_R_PP             0x00000003
 
-static inline target_ulong ppc_hash32_load_hpte0(CPUPPCState *env,
+static inline target_ulong ppc_hash32_load_hpte0(PowerPCCPU *cpu,
                                                  hwaddr pte_offset)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    return ldl_phys(cs->as, env->htab_base + pte_offset);
+    return ldl_phys(CPU(cpu)->as, env->htab_base + pte_offset);
 }
 
-static inline target_ulong ppc_hash32_load_hpte1(CPUPPCState *env,
+static inline target_ulong ppc_hash32_load_hpte1(PowerPCCPU *cpu,
                                                  hwaddr pte_offset)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    return ldl_phys(cs->as, env->htab_base + pte_offset + HASH_PTE_SIZE_32/2);
+    return ldl_phys(CPU(cpu)->as,
+                    env->htab_base + pte_offset + HASH_PTE_SIZE_32 / 2);
 }
 
-static inline void ppc_hash32_store_hpte0(CPUPPCState *env,
+static inline void ppc_hash32_store_hpte0(PowerPCCPU *cpu,
                                           hwaddr pte_offset, target_ulong pte0)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    stl_phys(cs->as, env->htab_base + pte_offset, pte0);
+    stl_phys(CPU(cpu)->as, env->htab_base + pte_offset, pte0);
 }
 
-static inline void ppc_hash32_store_hpte1(CPUPPCState *env,
+static inline void ppc_hash32_store_hpte1(PowerPCCPU *cpu,
                                           hwaddr pte_offset, target_ulong pte1)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     assert(!env->external_htab); /* Not supported on 32-bit for now */
-    stl_phys(cs->as, env->htab_base + pte_offset + HASH_PTE_SIZE_32/2, pte1);
+    stl_phys(CPU(cpu)->as,
+             env->htab_base + pte_offset + HASH_PTE_SIZE_32 / 2, pte1);
 }
 
 typedef struct {
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 34e20fa..03e25fd 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -40,8 +40,9 @@ bool kvmppc_kern_htab;
  * SLB handling
  */
 
-static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
+static ppc_slb_t *slb_lookup(PowerPCCPU *cpu, target_ulong eaddr)
 {
+    CPUPPCState *env = &cpu->env;
     uint64_t esid_256M, esid_1T;
     int n;
 
@@ -69,12 +70,13 @@ static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
     return NULL;
 }
 
-void dump_slb(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env)
+void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu)
 {
+    CPUPPCState *env = &cpu->env;
     int i;
     uint64_t slbe, slbv;
 
-    cpu_synchronize_state(CPU(ppc_env_get_cpu(env)));
+    cpu_synchronize_state(CPU(cpu));
 
     cpu_fprintf(f, "SLB\tESID\t\t\tVSID\n");
     for (i = 0; i < env->slb_nr; i++) {
@@ -117,7 +119,7 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
     PowerPCCPU *cpu = ppc_env_get_cpu(env);
     ppc_slb_t *slb;
 
-    slb = slb_lookup(env, addr);
+    slb = slb_lookup(cpu, addr);
     if (!slb) {
         return;
     }
@@ -133,8 +135,9 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
     }
 }
 
-int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
 {
+    CPUPPCState *env = &cpu->env;
     int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
@@ -159,9 +162,10 @@ int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
     return 0;
 }
 
-static int ppc_load_slb_esid(CPUPPCState *env, target_ulong rb,
+static int ppc_load_slb_esid(PowerPCCPU *cpu, target_ulong rb,
                              target_ulong *rt)
 {
+    CPUPPCState *env = &cpu->env;
     int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
@@ -173,9 +177,10 @@ static int ppc_load_slb_esid(CPUPPCState *env, target_ulong rb,
     return 0;
 }
 
-static int ppc_load_slb_vsid(CPUPPCState *env, target_ulong rb,
+static int ppc_load_slb_vsid(PowerPCCPU *cpu, target_ulong rb,
                              target_ulong *rt)
 {
+    CPUPPCState *env = &cpu->env;
     int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
@@ -189,7 +194,9 @@ static int ppc_load_slb_vsid(CPUPPCState *env, target_ulong rb,
 
 void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
 {
-    if (ppc_store_slb(env, rb, rs) < 0) {
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
+
+    if (ppc_store_slb(cpu, rb, rs) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
@@ -197,9 +204,10 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
 
 target_ulong helper_load_slb_esid(CPUPPCState *env, target_ulong rb)
 {
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
     target_ulong rt = 0;
 
-    if (ppc_load_slb_esid(env, rb, &rt) < 0) {
+    if (ppc_load_slb_esid(cpu, rb, &rt) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
@@ -208,9 +216,10 @@ target_ulong helper_load_slb_esid(CPUPPCState *env, target_ulong rb)
 
 target_ulong helper_load_slb_vsid(CPUPPCState *env, target_ulong rb)
 {
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
     target_ulong rt = 0;
 
-    if (ppc_load_slb_vsid(env, rb, &rt) < 0) {
+    if (ppc_load_slb_vsid(cpu, rb, &rt) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
@@ -221,9 +230,10 @@ target_ulong helper_load_slb_vsid(CPUPPCState *env, target_ulong rb)
  * 64-bit hash table MMU handling
  */
 
-static int ppc_hash64_pte_prot(CPUPPCState *env,
+static int ppc_hash64_pte_prot(PowerPCCPU *cpu,
                                ppc_slb_t *slb, ppc_hash_pte64_t pte)
 {
+    CPUPPCState *env = &cpu->env;
     unsigned pp, key;
     /* Some pp bit combinations have undefined behaviour, so default
      * to no access in those cases */
@@ -273,12 +283,12 @@ static int ppc_hash64_pte_prot(CPUPPCState *env,
     return prot;
 }
 
-static int ppc_hash64_amr_prot(CPUPPCState *env, ppc_hash_pte64_t pte)
+static int ppc_hash64_amr_prot(PowerPCCPU *cpu, ppc_hash_pte64_t pte)
 {
+    CPUPPCState *env = &cpu->env;
     int key, amrbits;
     int prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
 
-
     /* Only recent MMUs implement Virtual Page Class Key Protection */
     if (!(env->mmu_model & POWERPC_MMU_AMR)) {
         return prot;
@@ -347,23 +357,24 @@ void ppc_hash64_stop_access(uint64_t token)
     }
 }
 
-static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
+static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
                                      bool secondary, target_ulong ptem,
                                      ppc_hash_pte64_t *pte)
 {
+    CPUPPCState *env = &cpu->env;
     int i;
     uint64_t token;
     target_ulong pte0, pte1;
     target_ulong pte_index;
 
     pte_index = (hash & env->htab_mask) * HPTES_PER_GROUP;
-    token = ppc_hash64_start_access(ppc_env_get_cpu(env), pte_index);
+    token = ppc_hash64_start_access(cpu, pte_index);
     if (!token) {
         return -1;
     }
     for (i = 0; i < HPTES_PER_GROUP; i++) {
-        pte0 = ppc_hash64_load_hpte0(env, token, i);
-        pte1 = ppc_hash64_load_hpte1(env, token, i);
+        pte0 = ppc_hash64_load_hpte0(cpu, token, i);
+        pte1 = ppc_hash64_load_hpte1(cpu, token, i);
 
         if ((pte0 & HPTE64_V_VALID)
             && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
@@ -399,10 +410,11 @@ static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
     return epnshift;
 }
 
-static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
+static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
                                      ppc_slb_t *slb, target_ulong eaddr,
                                      ppc_hash_pte64_t *pte)
 {
+    CPUPPCState *env = &cpu->env;
     hwaddr pte_offset;
     hwaddr hash;
     uint64_t vsid, epnshift, epnmask, epn, ptem;
@@ -435,7 +447,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
             " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx
             " hash=" TARGET_FMT_plx "\n",
             env->htab_base, env->htab_mask, vsid, ptem,  hash);
-    pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, pte);
+    pte_offset = ppc_hash64_pteg_search(cpu, hash, 0, ptem, pte);
 
     if (pte_offset == -1) {
         /* Secondary PTEG lookup */
@@ -445,7 +457,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env,
                 " hash=" TARGET_FMT_plx "\n", env->htab_base,
                 env->htab_mask, vsid, ptem, ~hash);
 
-        pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, pte);
+        pte_offset = ppc_hash64_pteg_search(cpu, ~hash, 1, ptem, pte);
     }
 
     return pte_offset;
@@ -492,7 +504,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     }
 
     /* 2. Translation is on, so look up the SLB */
-    slb = slb_lookup(env, eaddr);
+    slb = slb_lookup(cpu, eaddr);
 
     if (!slb) {
         if (rwx == 2) {
@@ -514,7 +526,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     }
 
     /* 4. Locate the PTE in the hash table */
-    pte_offset = ppc_hash64_htab_lookup(env, slb, eaddr, &pte);
+    pte_offset = ppc_hash64_htab_lookup(cpu, slb, eaddr, &pte);
     if (pte_offset == -1) {
         if (rwx == 2) {
             cs->exception_index = POWERPC_EXCP_ISI;
@@ -536,8 +548,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 5. Check access permissions */
 
-    pp_prot = ppc_hash64_pte_prot(env, slb, pte);
-    amr_prot = ppc_hash64_amr_prot(env, pte);
+    pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
+    amr_prot = ppc_hash64_amr_prot(cpu, pte);
     prot = pp_prot & amr_prot;
 
     if ((need_prot[rwx] & ~prot) != 0) {
@@ -580,7 +592,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     }
 
     if (new_pte1 != pte.pte1) {
-        ppc_hash64_store_hpte(env, pte_offset / HASH_PTE_SIZE_64,
+        ppc_hash64_store_hpte(cpu, pte_offset / HASH_PTE_SIZE_64,
                               pte.pte0, new_pte1);
     }
 
@@ -594,8 +606,9 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     return 0;
 }
 
-hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
+hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
 {
+    CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
@@ -605,12 +618,12 @@ hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
         return addr & 0x0FFFFFFFFFFFFFFFULL;
     }
 
-    slb = slb_lookup(env, addr);
+    slb = slb_lookup(cpu, addr);
     if (!slb) {
         return -1;
     }
 
-    pte_offset = ppc_hash64_htab_lookup(env, slb, addr, &pte);
+    pte_offset = ppc_hash64_htab_lookup(cpu, slb, addr, &pte);
     if (pte_offset == -1) {
         return -1;
     }
@@ -618,11 +631,11 @@ hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
     return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
 }
 
-void ppc_hash64_store_hpte(CPUPPCState *env,
+void ppc_hash64_store_hpte(PowerPCCPU *cpu,
                            target_ulong pte_index,
                            target_ulong pte0, target_ulong pte1)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
 
     if (kvmppc_kern_htab) {
         kvmppc_hash64_write_pte(env, pte_index, pte0, pte1);
@@ -632,9 +645,10 @@ void ppc_hash64_store_hpte(CPUPPCState *env,
     pte_index *= HASH_PTE_SIZE_64;
     if (env->external_htab) {
         stq_p(env->external_htab + pte_index, pte0);
-        stq_p(env->external_htab + pte_index + HASH_PTE_SIZE_64/2, pte1);
+        stq_p(env->external_htab + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
     } else {
-        stq_phys(cs->as, env->htab_base + pte_index, pte0);
-        stq_phys(cs->as, env->htab_base + pte_index + HASH_PTE_SIZE_64/2, pte1);
+        stq_phys(CPU(cpu)->as, env->htab_base + pte_index, pte0);
+        stq_phys(CPU(cpu)->as,
+                 env->htab_base + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
     }
 }
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 291750f..6e3de7e 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -4,12 +4,13 @@
 #ifndef CONFIG_USER_ONLY
 
 #ifdef TARGET_PPC64
-void dump_slb(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env);
-int ppc_store_slb (CPUPPCState *env, target_ulong rb, target_ulong rs);
-hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr);
+void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
+void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
+hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
-void ppc_hash64_store_hpte(CPUPPCState *env, target_ulong index,
+void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
                            target_ulong pte0, target_ulong pte1);
 #endif
 
@@ -85,31 +86,31 @@ extern bool kvmppc_kern_htab;
 uint64_t ppc_hash64_start_access(PowerPCCPU *cpu, target_ulong pte_index);
 void ppc_hash64_stop_access(uint64_t token);
 
-static inline target_ulong ppc_hash64_load_hpte0(CPUPPCState *env,
+static inline target_ulong ppc_hash64_load_hpte0(PowerPCCPU *cpu,
                                                  uint64_t token, int index)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
     uint64_t addr;
 
     addr = token + (index * HASH_PTE_SIZE_64);
     if (env->external_htab) {
         return  ldq_p((const void *)(uintptr_t)addr);
     } else {
-        return ldq_phys(cs->as, addr);
+        return ldq_phys(CPU(cpu)->as, addr);
     }
 }
 
-static inline target_ulong ppc_hash64_load_hpte1(CPUPPCState *env,
+static inline target_ulong ppc_hash64_load_hpte1(PowerPCCPU *cpu,
                                                  uint64_t token, int index)
 {
-    CPUState *cs = CPU(ppc_env_get_cpu(env));
+    CPUPPCState *env = &cpu->env;
     uint64_t addr;
 
     addr = token + (index * HASH_PTE_SIZE_64) + HASH_PTE_SIZE_64/2;
     if (env->external_htab) {
         return  ldq_p((const void *)(uintptr_t)addr);
     } else {
-        return ldq_phys(cs->as, addr);
+        return ldq_phys(CPU(cpu)->as, addr);
     }
 }
 
diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index 5217691..0ab73bc 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -1297,7 +1297,7 @@ void dump_mmu(FILE *f, fprintf_function cpu_fprintf, CPUPPCState *env)
     case POWERPC_MMU_2_06a:
     case POWERPC_MMU_2_07:
     case POWERPC_MMU_2_07a:
-        dump_slb(f, cpu_fprintf, env);
+        dump_slb(f, cpu_fprintf, ppc_env_get_cpu(env));
         break;
 #endif
     default:
@@ -1439,12 +1439,12 @@ hwaddr ppc_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
     case POWERPC_MMU_2_06a:
     case POWERPC_MMU_2_07:
     case POWERPC_MMU_2_07a:
-        return ppc_hash64_get_phys_page_debug(env, addr);
+        return ppc_hash64_get_phys_page_debug(cpu, addr);
 #endif
 
     case POWERPC_MMU_32B:
     case POWERPC_MMU_601:
-        return ppc_hash32_get_phys_page_debug(env, addr);
+        return ppc_hash32_get_phys_page_debug(cpu, addr);
 
     default:
         ;
@@ -1510,6 +1510,7 @@ static int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                                     int rw, int mmu_idx)
 {
     CPUState *cs = CPU(ppc_env_get_cpu(env));
+    PowerPCCPU *cpu = POWERPC_CPU(cs);
     mmu_ctx_t ctx;
     int access_type;
     int ret = 0;
@@ -1611,9 +1612,9 @@ static int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                 tlb_miss:
                     env->error_code |= ctx.key << 19;
                     env->spr[SPR_HASH1] = env->htab_base +
-                        get_pteg_offset32(env, ctx.hash[0]);
+                        get_pteg_offset32(cpu, ctx.hash[0]);
                     env->spr[SPR_HASH2] = env->htab_base +
-                        get_pteg_offset32(env, ctx.hash[1]);
+                        get_pteg_offset32(cpu, ctx.hash[1]);
                     break;
                 case POWERPC_MMU_SOFT_74xx:
                     if (rw == 1) {
@@ -2101,7 +2102,7 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
         /* flags = flags */
         rs |= ((value >> 27) & 0xf) << 8;
 
-        ppc_store_slb(env, rb, rs);
+        ppc_store_slb(cpu, rb, rs);
     } else
 #endif
     if (env->sr[srnum] != value) {
-- 
2.5.0

* [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25 19:22   ` Alexander Graf
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup David Gibson
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

ppc_store_slb updates the SLB for PPC CPUs with 64-bit hash MMUs.
Currently it takes two parameters, which contain values encoded as the
register arguments to the slbmte instruction: one register contains the
ESID portion of the SLBE along with the slot number, while the other
contains the VSID portion of the SLBE.

We're shortly going to want to do some SLB updates from other code where
it is more convenient to supply the slot number and ESID separately, so
rework this function and its callers to work this way.
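
Concretely, callers that still start from a packed slbmte-style rb value
now do the split themselves, as in the kvm.c and helper_store_slb() hunks
below (a sketch of that split; the 0xfff slot mask is the one the existing
code already uses):

    target_ulong slot = rb & 0xfff;   /* slot index: low 12 bits of rb */
    target_ulong esid = rb & ~0xfff;  /* ESID portion plus the valid bit */

    ppc_store_slb(cpu, slot, esid, rs);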

As a bonus, this slightly simplifies the emulation of segment registers
used when running a 32-bit OS on a 64-bit CPU.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/kvm.c        |  2 +-
 target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
 target-ppc/mmu-hash64.h |  3 ++-
 target-ppc/mmu_helper.c | 14 +++++---------
 4 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 98d7ba6..18c7ba2 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
              * Only restore valid entries
              */
             if (rb & SLB_ESID_V) {
-                ppc_store_slb(cpu, rb, rs);
+                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs);
             }
         }
 #endif
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 03e25fd..5a6d33b 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
     }
 }
 
-int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
+                  target_ulong esid, target_ulong vsid)
 {
     CPUPPCState *env = &cpu->env;
-    int slot = rb & 0xfff;
     ppc_slb_t *slb = &env->slb[slot];
 
-    if (rb & (0x1000 - env->slb_nr)) {
-        return -1; /* Reserved bits set or slot too high */
+    if (slot >= env->slb_nr) {
+        return -1; /* Bad slot number */
+    }
+    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
+        return -1; /* Reserved bits set */
     }
-    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
+    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
         return -1; /* Bad segment size */
     }
-    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
+    if ((vsid & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
         return -1; /* 1T segment on MMU that doesn't support it */
     }
 
-    /* Mask out the slot number as we store the entry */
-    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
-    slb->vsid = rs;
+    slb->esid = esid;
+    slb->vsid = vsid;
 
     LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
-            " %016" PRIx64 "\n", __func__, slot, rb, rs,
+            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
             slb->esid, slb->vsid);
 
     return 0;
@@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
 {
     PowerPCCPU *cpu = ppc_env_get_cpu(env);
 
-    if (ppc_store_slb(cpu, rb, rs) < 0) {
+    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs) < 0) {
         helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                    POWERPC_EXCP_INVAL);
     }
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 6e3de7e..24fd2c4 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -6,7 +6,8 @@
 #ifdef TARGET_PPC64
 void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
 void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
-int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
+int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
+                  target_ulong esid, target_ulong vsid);
 hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index 0ab73bc..c040b17 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -2088,21 +2088,17 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
             (int)srnum, value, env->sr[srnum]);
 #if defined(TARGET_PPC64)
     if (env->mmu_model & POWERPC_MMU_64) {
-        uint64_t rb = 0, rs = 0;
+        uint64_t esid, vsid;
 
         /* ESID = srnum */
-        rb |= ((uint32_t)srnum & 0xf) << 28;
-        /* Set the valid bit */
-        rb |= SLB_ESID_V;
-        /* Index = ESID */
-        rb |= (uint32_t)srnum;
+        esid = ((uint64_t)(srnum & 0xf) << 28) | SLB_ESID_V;
 
         /* VSID = VSID */
-        rs |= (value & 0xfffffff) << 12;
+        vsid = (value & 0xfffffff) << 12;
         /* flags = flags */
-        rs |= ((value >> 27) & 0xf) << 8;
+        vsid |= ((value >> 27) & 0xf) << 8;
 
-        ppc_store_slb(cpu, rb, rs);
+        ppc_store_slb(cpu, srnum, esid, vsid);
     } else
 #endif
     if (env->sr[srnum] != value) {
-- 
2.5.0

* [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (2 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25 19:38   ` Alexander Graf
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Currently, the ppc_hash64_page_shift() function looks up a page size based
on information in an SLB entry.  It open codes the bit translation for
existing CPUs; however, different CPU models can have different SLB
encodings.  We already store those encodings in the 'sps' table in
CPUPPCState, but we don't currently enforce that the table actually
matches the logic in ppc_hash64_page_shift().

This patch reworks lookup of page size from SLB in several ways:
  * ppc_store_slb() will now fail (triggering an illegal instruction
    exception) if given a bad SLB page size encoding
  * On success, ppc_store_slb() stores a pointer to the relevant entry in
    the page size table in the SLB entry.  This is looked up directly from
    the published table of page size encodings, so it can't get out of
    sync (see the sketch after this list).
  * ppc_hash64_htab_lookup() and others now use this precached page size
    information rather than decoding the SLB values
  * Adjust ppc_hash64_pte_raddr() to take a page shift directly instead
    of an SLB entry.  Both callers now have easy access to this, and
    we'll need the flexibility shortly.
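
A condensed sketch of the new flow, extracted from the hunks below (error
handling omitted):

    /* At SLB store time: find the entry in the CPU's published table
     * whose encoding matches the L||LP bits of the new VSID, and cache
     * a pointer to it in the SLB entry. */
    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];

        if (!sps1->page_shift) {
            break;                          /* end of table: no match */
        }
        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
            slb->sps = sps1;
            break;
        }
    }

    /* At translation time: no decoding, just read the cached entry. */
    epnmask = ~((1ULL << slb->sps->page_shift) - 1);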

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/cpu.h        |  1 +
 target-ppc/machine.c    | 20 ++++++++++++++
 target-ppc/mmu-hash64.c | 71 ++++++++++++++++++++++++++-----------------------
 3 files changed, 59 insertions(+), 33 deletions(-)

diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
index 2bc96b4..0820390 100644
--- a/target-ppc/cpu.h
+++ b/target-ppc/cpu.h
@@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
 struct ppc_slb_t {
     uint64_t esid;
     uint64_t vsid;
+    const struct ppc_one_seg_page_size *sps;
 };
 
 #define MAX_SLB_ENTRIES         64
diff --git a/target-ppc/machine.c b/target-ppc/machine.c
index b61c060..ca62d3e 100644
--- a/target-ppc/machine.c
+++ b/target-ppc/machine.c
@@ -2,6 +2,7 @@
 #include "hw/boards.h"
 #include "sysemu/kvm.h"
 #include "helper_regs.h"
+#include "mmu-hash64.h"
 
 static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
 {
@@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
     return (cpu->env.mmu_model & POWERPC_MMU_64);
 }
 
+static int slb_post_load(void *opaque, int version_id)
+{
+    PowerPCCPU *cpu = opaque;
+    CPUPPCState *env = &cpu->env;
+    int i;
+
+    /* We've pulled in the raw esid and vsid values from the migration
+     * stream, but we need to recompute the page size pointers */
+    for (i = 0; i < env->slb_nr; i++) {
+        if (ppc_store_slb(cpu, i, env->slb[i].esid, env->slb[i].vsid) < 0) {
+            /* Migration source had bad values in its SLB */
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
 static const VMStateDescription vmstate_slb = {
     .name = "cpu/slb",
     .version_id = 1,
     .minimum_version_id = 1,
     .needed = slb_needed,
+    .post_load = slb_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
         VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 5a6d33b..28ad361 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -19,6 +19,7 @@
  */
 #include "cpu.h"
 #include "exec/helper-proto.h"
+#include "qemu/error-report.h"
 #include "sysemu/kvm.h"
 #include "kvm_ppc.h"
 #include "mmu-hash64.h"
@@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
 {
     CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb = &env->slb[slot];
+    const struct ppc_one_seg_page_size *sps = NULL;
+    int i;
 
     if (slot >= env->slb_nr) {
         return -1; /* Bad slot number */
@@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
         return -1; /* 1T segment on MMU that doesn't support it */
     }
 
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
+
+        if (!sps1->page_shift) {
+            break;
+        }
+
+        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
+            sps = sps1;
+            break;
+        }
+    }
+
+    if (!sps) {
+        error_report("Bad page size encoding in SLB store: slot "TARGET_FMT_lu
+                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
+                     slot, esid, vsid);
+        return -1;
+    }
+
     slb->esid = esid;
     slb->vsid = vsid;
+    slb->sps = sps;
 
     LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
             " %016" PRIx64 "\n", __func__, slot, esid, vsid,
@@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
     return -1;
 }
 
-static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
-{
-    uint64_t epnshift;
-
-    /* Page size according to the SLB, which we use to generate the
-     * EPN for hash table lookup..  When we implement more recent MMU
-     * extensions this might be different from the actual page size
-     * encoded in the PTE */
-    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
-        epnshift = TARGET_PAGE_BITS;
-    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
-        epnshift = TARGET_PAGE_BITS_64K;
-    } else {
-        epnshift = TARGET_PAGE_BITS_16M;
-    }
-    return epnshift;
-}
-
 static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
                                      ppc_slb_t *slb, target_ulong eaddr,
                                      ppc_hash_pte64_t *pte)
@@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     CPUPPCState *env = &cpu->env;
     hwaddr pte_offset;
     hwaddr hash;
-    uint64_t vsid, epnshift, epnmask, epn, ptem;
+    uint64_t vsid, epnmask, epn, ptem;
+
+    /* The SLB store path should prevent any bad page size encodings
+     * getting in there, so: */
+    assert(slb->sps);
 
-    epnshift = ppc_hash64_page_shift(slb);
-    epnmask = ~((1ULL << epnshift) - 1);
+    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
 
     if (slb->vsid & SLB_VSID_B) {
         /* 1TB segment */
         vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
         epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
-        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
+        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
     } else {
         /* 256M segment */
         vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
         epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
-        hash = vsid ^ (epn >> epnshift);
+        hash = vsid ^ (epn >> slb->sps->page_shift);
     }
     ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) & HPTE64_V_AVPN);
 
@@ -465,17 +474,12 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     return pte_offset;
 }
 
-static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t pte,
+static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
                                    target_ulong eaddr)
 {
-    hwaddr mask;
-    int target_page_bits;
+    hwaddr mask = (1ULL << page_shift) - 1;
     hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
-    /*
-     * We support 4K, 64K and 16M now
-     */
-    target_page_bits = ppc_hash64_page_shift(slb);
-    mask = (1ULL << target_page_bits) - 1;
+
     return (rpn & ~mask) | (eaddr & mask);
 }
 
@@ -600,7 +604,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 7. Determine the real address from the PTE */
 
-    raddr = ppc_hash64_pte_raddr(slb, pte, eaddr);
+    raddr = ppc_hash64_pte_raddr(slb->sps->page_shift, pte, eaddr);
 
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                  prot, mmu_idx, TARGET_PAGE_SIZE);
@@ -630,7 +634,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         return -1;
     }
 
-    return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
+    return ppc_hash64_pte_raddr(slb->sps->page_shift, pte, addr)
+        & TARGET_PAGE_MASK;
 }
 
 void ppc_hash64_store_hpte(PowerPCCPU *cpu,
-- 
2.5.0

* [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (3 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25 13:26   ` David Gibson
  2016-01-25 20:18   ` Alexander Graf
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
                   ` (5 subsequent siblings)
  10 siblings, 2 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

At present the 64-bit hash MMU code uses information from the SLB to
determine the page size of a translation.  We do need that information to
correctly look up the hash table.  However, the MMU also allows a
possibly larger page size to be encoded into the HPTE itself, which is
used to populate the TLB.  At present qemu doesn't check that, and so
doesn't support the MPSS ("Multiple Page Size per Segment") feature.

This makes a start on allowing that, by adding an hpte_page_shift()
function which looks up the page size of an HPTE.  We use this to
validate page size encodings on faults, and to populate the qemu TLB
with larger page sizes when appropriate.
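
To sketch what this enables (condensed from the hunks below; the 16MiB
example is mine, not from the patch): under MPSS a segment with a 64kiB
base page size may also contain 16MiB pages, marked by the HPTE64_V_LARGE
bit in pte0 and a size encoding in the low-order RPN bits of pte1.  The
fault path now resolves and uses that actual size:

    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
    if (!apshift) {
        /* bad encoding: treat it as a hash miss for the guest */
    }
    /* e.g. apshift == 24 for a 16MiB page in a 64kiB base segment */
    tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                 prot, mmu_idx, 1ULL << apshift);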

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/mmu-hash64.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 70 insertions(+), 4 deletions(-)

diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 28ad361..bcad826 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -21,6 +21,7 @@
 #include "exec/helper-proto.h"
 #include "qemu/error-report.h"
 #include "sysemu/kvm.h"
+#include "qemu/error-report.h"
 #include "kvm_ppc.h"
 #include "mmu-hash64.h"
 
@@ -474,6 +475,43 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     return pte_offset;
 }
 
+static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
+    uint64_t pte0, uint64_t pte1)
+{
+    int i;
+
+    if (!(pte0 & HPTE64_V_LARGE)) {
+        if (sps->page_shift != 12) {
+            /* 4kiB page in a non 4kiB segment */
+            return 0;
+        }
+        /* Normal 4kiB page */
+        return 12;
+    }
+
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_page_size *ps = &sps->enc[i];
+        uint64_t mask;
+
+        if (!ps->page_shift) {
+            break;
+        }
+
+        if (ps->page_shift == 12) {
+            /* L bit is set so this can't be a 4kiB page */
+            continue;
+        }
+
+        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
+
+        if ((pte1 & mask) == ps->pte_enc) {
+            return ps->page_shift;
+        }
+    }
+
+    return 0; /* Bad page size encoding */
+}
+
 static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
                                    target_ulong eaddr)
 {
@@ -489,6 +527,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb;
+    unsigned apshift;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
     int pp_prot, amr_prot, prot;
@@ -552,6 +591,28 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     qemu_log_mask(CPU_LOG_MMU,
                 "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
 
+    /* Validate page size encoding */
+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    if (!apshift) {
+        error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
+                     " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
+        /* Treat it like a hash miss for the guest */
+        if (rwx == 2) {
+            cs->exception_index = POWERPC_EXCP_ISI;
+            env->error_code = 0x40000000;
+        } else {
+            cs->exception_index = POWERPC_EXCP_DSI;
+            env->error_code = 0;
+            env->spr[SPR_DAR] = eaddr;
+            if (rwx == 1) {
+                env->spr[SPR_DSISR] = 0x42000000;
+            } else {
+                env->spr[SPR_DSISR] = 0x40000000;
+            }
+        }
+        return 1;
+    }
+
     /* 5. Check access permissions */
 
     pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
@@ -604,10 +665,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 7. Determine the real address from the PTE */
 
-    raddr = ppc_hash64_pte_raddr(slb->sps->page_shift, pte, eaddr);
+    raddr = ppc_hash64_pte_raddr(apshift, pte, eaddr);
 
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
-                 prot, mmu_idx, TARGET_PAGE_SIZE);
+                 prot, mmu_idx, 1ULL << apshift);
 
     return 0;
 }
@@ -618,6 +679,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
     ppc_slb_t *slb;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
+    unsigned apshift;
 
     if (msr_dr == 0) {
         /* In real mode the top 4 effective address bits are ignored */
@@ -634,8 +696,12 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         return -1;
     }
 
-    return ppc_hash64_pte_raddr(slb->sps->page_shift, pte, addr)
-        & TARGET_PAGE_MASK;
+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    if (!apshift) {
+        return -1;
+    }
+
+    return ppc_hash64_pte_raddr(apshift, pte, addr) & TARGET_PAGE_MASK;
 }
 
 void ppc_hash64_store_hpte(PowerPCCPU *cpu,
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (4 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

ppc_tlb_invalidate_one() has a big switch handling many different MMU
types.  However, most of those branches can never be reached:

It is called from 3 places: from remove_hpte() and h_protect() in
spapr_hcall.c (which always has a 64-bit hash MMU type), and from
helper_tlbie() in mmu_helper.c.

Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
gen_tlbiva.  The first two are only used with the PPC_MEM_TLBIE flag,
set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
used only on 440 and 460 models with the BookE MMU model.

This means the exhaustive list of MMU types which may call
ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.

Clean up by removing logic for all other MMU types from
ppc_tlb_invalidate_one().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/mmu_helper.c | 20 ++------------------
 1 file changed, 2 insertions(+), 18 deletions(-)

diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index c040b17..82ebe5d 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -1971,25 +1971,10 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
             ppc6xx_tlb_invalidate_virt(env, addr, 1);
         }
         break;
-    case POWERPC_MMU_SOFT_4xx:
-    case POWERPC_MMU_SOFT_4xx_Z:
-        ppc4xx_tlb_invalidate_virt(env, addr, env->spr[SPR_40x_PID]);
-        break;
-    case POWERPC_MMU_REAL:
-        cpu_abort(CPU(cpu), "No TLB for PowerPC 4xx in real mode\n");
-        break;
-    case POWERPC_MMU_MPC8xx:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "MPC8xx MMU model is not implemented\n");
-        break;
     case POWERPC_MMU_BOOKE:
         /* XXX: TODO */
         cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
         break;
-    case POWERPC_MMU_BOOKE206:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "BookE 2.06 MMU model is not implemented\n");
-        break;
     case POWERPC_MMU_32B:
     case POWERPC_MMU_601:
         /* tlbie invalidate TLBs for all segments */
@@ -2031,9 +2016,8 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
         break;
 #endif /* defined(TARGET_PPC64) */
     default:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "Unknown MMU model\n");
-        break;
+        /* Should never reach here with other MMU models */
+        assert(0);
     }
 #else
     ppc_tlb_invalidate_all(env);
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (5 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Currently both the tlbiva instruction (used on 44x chips) and the tlbie
instruction (used on hash MMU chips) are both handled via
ppc_tlb_invalidate_one().  This is silly, because they're invoked from
different places, and do different things.

Clean this up by separating out the tlbiva instruction into its own
handling.  In fact the implementation is only a stub anyway.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/helper.h     |  1 +
 target-ppc/mmu_helper.c | 14 ++++++++++----
 target-ppc/translate.c  |  2 +-
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/target-ppc/helper.h b/target-ppc/helper.h
index 869be15..e5a8f7b 100644
--- a/target-ppc/helper.h
+++ b/target-ppc/helper.h
@@ -544,6 +544,7 @@ DEF_HELPER_2(74xx_tlbd, void, env, tl)
 DEF_HELPER_2(74xx_tlbi, void, env, tl)
 DEF_HELPER_FLAGS_1(tlbia, TCG_CALL_NO_RWG, void, env)
 DEF_HELPER_FLAGS_2(tlbie, TCG_CALL_NO_RWG, void, env, tl)
+DEF_HELPER_FLAGS_2(tlbiva, TCG_CALL_NO_RWG, void, env, tl)
 #if defined(TARGET_PPC64)
 DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
 DEF_HELPER_2(load_slb_esid, tl, env, tl)
diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
index 82ebe5d..e9e0edb 100644
--- a/target-ppc/mmu_helper.c
+++ b/target-ppc/mmu_helper.c
@@ -1971,10 +1971,6 @@ void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr)
             ppc6xx_tlb_invalidate_virt(env, addr, 1);
         }
         break;
-    case POWERPC_MMU_BOOKE:
-        /* XXX: TODO */
-        cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
-        break;
     case POWERPC_MMU_32B:
     case POWERPC_MMU_601:
         /* tlbie invalidate TLBs for all segments */
@@ -2116,6 +2112,16 @@ void helper_tlbie(CPUPPCState *env, target_ulong addr)
     ppc_tlb_invalidate_one(env, addr);
 }
 
+void helper_tlbiva(CPUPPCState *env, target_ulong addr)
+{
+    PowerPCCPU *cpu = ppc_env_get_cpu(env);
+
+    /* tlbiva instruction only exists on BookE */
+    assert(env->mmu_model == POWERPC_MMU_BOOKE);
+    /* XXX: TODO */
+    cpu_abort(CPU(cpu), "BookE MMU model is not implemented\n");
+}
+
 /* Software driven TLBs management */
 /* PowerPC 602/603 software TLB load instructions helpers */
 static void do_6xx_tlb(CPUPPCState *env, target_ulong new_EPN, int is_code)
diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index 4be7eaa..a05a169 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -5904,7 +5904,7 @@ static void gen_tlbiva(DisasContext *ctx)
     }
     t0 = tcg_temp_new();
     gen_addr_reg_index(ctx, t0);
-    gen_helper_tlbie(cpu_env, cpu_gpr[rB(ctx->opcode)]);
+    gen_helper_tlbiva(cpu_env, cpu_gpr[rB(ctx->opcode)]);
     tcg_temp_free(t0);
 #endif
 }
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (6 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

When HPTEs are removed or modified by hypercalls on spapr, we need to
invalidate the relevant pages in the qemu TLB.

Currently we do that by doing some complicated calculations to work out the
right encoding for the tlbie instruction, then passing that to
ppc_tlb_invalidate_one()... which totally ignores the argument and flushes
the whole tlb.

Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_hcall.c    | 46 ++++------------------------------------------
 target-ppc/mmu-hash64.c | 12 ++++++++++++
 target-ppc/mmu-hash64.h |  3 +++
 3 files changed, 19 insertions(+), 42 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 4707196..dedc7e0 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -37,42 +37,6 @@ static void set_spr(CPUState *cs, int spr, target_ulong value,
     run_on_cpu(cs, do_spr_sync, &s);
 }
 
-static target_ulong compute_tlbie_rb(target_ulong v, target_ulong r,
-                                     target_ulong pte_index)
-{
-    target_ulong rb, va_low;
-
-    rb = (v & ~0x7fULL) << 16; /* AVA field */
-    va_low = pte_index >> 3;
-    if (v & HPTE64_V_SECONDARY) {
-        va_low = ~va_low;
-    }
-    /* xor vsid from AVA */
-    if (!(v & HPTE64_V_1TB_SEG)) {
-        va_low ^= v >> 12;
-    } else {
-        va_low ^= v >> 24;
-    }
-    va_low &= 0x7ff;
-    if (v & HPTE64_V_LARGE) {
-        rb |= 1;                         /* L field */
-#if 0 /* Disable that P7 specific bit for now */
-        if (r & 0xff000) {
-            /* non-16MB large page, must be 64k */
-            /* (masks depend on page size) */
-            rb |= 0x1000;                /* page encoding in LP field */
-            rb |= (va_low & 0x7f) << 16; /* 7b of VA in AVA/LP field */
-            rb |= (va_low & 0xfe);       /* AVAL field */
-        }
-#endif
-    } else {
-        /* 4kB page */
-        rb |= (va_low & 0x7ff) << 12;   /* remaining 11b of AVA */
-    }
-    rb |= (v >> 54) & 0x300;            /* B field */
-    return rb;
-}
-
 static inline bool valid_pte_index(CPUPPCState *env, target_ulong pte_index)
 {
     /*
@@ -198,7 +162,7 @@ static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
 {
     CPUPPCState *env = &cpu->env;
     uint64_t token;
-    target_ulong v, r, rb;
+    target_ulong v, r;
 
     if (!valid_pte_index(env, ptex)) {
         return REMOVE_PARM;
@@ -217,8 +181,7 @@ static RemoveResult remove_hpte(PowerPCCPU *cpu, target_ulong ptex,
     *vp = v;
     *rp = r;
     ppc_hash64_store_hpte(cpu, ptex, HPTE64_V_HPTE_DIRTY, 0);
-    rb = compute_tlbie_rb(v, r, ptex);
-    ppc_tlb_invalidate_one(env, rb);
+    ppc_hash64_tlb_flush_hpte(cpu, ptex, v, r);
     return REMOVE_SUCCESS;
 }
 
@@ -322,7 +285,7 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     target_ulong pte_index = args[1];
     target_ulong avpn = args[2];
     uint64_t token;
-    target_ulong v, r, rb;
+    target_ulong v, r;
 
     if (!valid_pte_index(env, pte_index)) {
         return H_PARAMETER;
@@ -343,10 +306,9 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     r |= (flags << 55) & HPTE64_R_PP0;
     r |= (flags << 48) & HPTE64_R_KEY_HI;
     r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
-    rb = compute_tlbie_rb(v, r, pte_index);
     ppc_hash64_store_hpte(cpu, pte_index,
                           (v & ~HPTE64_V_VALID) | HPTE64_V_HPTE_DIRTY, 0);
-    ppc_tlb_invalidate_one(env, rb);
+    ppc_hash64_tlb_flush_hpte(cpu, pte_index, v, r);
     /* Don't need a memory barrier, due to qemu's global lock */
     ppc_hash64_store_hpte(cpu, pte_index, v | HPTE64_V_HPTE_DIRTY, r);
     return H_SUCCESS;
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index bcad826..2be04e9 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -725,3 +725,15 @@ void ppc_hash64_store_hpte(PowerPCCPU *cpu,
                  env->htab_base + pte_index + HASH_PTE_SIZE_64 / 2, pte1);
     }
 }
+
+void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
+                               target_ulong pte_index,
+                               target_ulong pte0, target_ulong pte1)
+{
+    /*
+     * XXX: given the fact that there are too many segments to
+     * invalidate, and we still don't have a tlb_flush_mask(env, n,
+     * mask) in QEMU, we just invalidate all TLBs
+     */
+    tlb_flush(CPU(cpu), 1);
+}
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 24fd2c4..293a951 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -13,6 +13,9 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
                                 int mmu_idx);
 void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
                            target_ulong pte0, target_ulong pte1);
+void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
+                               target_ulong pte_index,
+                               target_ulong pte0, target_ulong pte1);
 #endif
 
 /*
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH 09/10] target-ppc: Helper to determine page size information from hpte alone
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (7 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
  2016-01-25 11:10 ` [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

h_enter() in the spapr code needs to know the page size of the HPTE it's
about to insert.  Unlike other paths that do this, it doesn't have access
to the SLB, so at the moment it determines this with some open-coded
tests which assume POWER7 or POWER8 page size encodings.

To make this more flexible, add ppc_hash64_hpte_page_shift_noslb() to
determine both the "base" page size per segment, and the individual
effective page size from an HPTE alone.

This means that the spapr code should now be able to handle any page size
listed in the env->sps table.
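
As a usage illustration (a sketch, not the literal h_enter() code):
the variable names mirror the h_enter() locals in the diff below, and
the expected values assume the POWER7/POWER8 table from patch 10/10
plus the pte_enc shift fix noted elsewhere in this thread.

unsigned apshift, spshift;

apshift = ppc_hash64_hpte_page_shift_noslb(cpu, pteh, ptel, &spshift);
if (!apshift) {
    return H_PARAMETER;     /* bad page size encoding */
}
/* e.g. a 16MiB page mapped through a 64kiB segment should yield
 * apshift == 24 and spshift == 16; a plain 4kiB HPTE (HPTE64_V_LARGE
 * clear) yields 12 for both. */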

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_hcall.c    | 25 ++++++-------------------
 target-ppc/mmu-hash64.c | 35 +++++++++++++++++++++++++++++++++++
 target-ppc/mmu-hash64.h |  3 +++
 3 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index dedc7e0..a535c73 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -72,31 +72,18 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPRMachineState *spapr,
     target_ulong pte_index = args[1];
     target_ulong pteh = args[2];
     target_ulong ptel = args[3];
-    target_ulong page_shift = 12;
+    unsigned apshift, spshift;
     target_ulong raddr;
     target_ulong index;
     uint64_t token;
 
-    /* only handle 4k and 16M pages for now */
-    if (pteh & HPTE64_V_LARGE) {
-#if 0 /* We don't support 64k pages yet */
-        if ((ptel & 0xf000) == 0x1000) {
-            /* 64k page */
-        } else
-#endif
-        if ((ptel & 0xff000) == 0) {
-            /* 16M page */
-            page_shift = 24;
-            /* lowest AVA bit must be 0 for 16M pages */
-            if (pteh & 0x80) {
-                return H_PARAMETER;
-            }
-        } else {
-            return H_PARAMETER;
-        }
+    apshift = ppc_hash64_hpte_page_shift_noslb(cpu, pteh, ptel, &spshift);
+    if (!apshift) {
+        /* Bad page size encoding */
+        return H_PARAMETER;
     }
 
-    raddr = (ptel & HPTE64_R_RPN) & ~((1ULL << page_shift) - 1);
+    raddr = (ptel & HPTE64_R_RPN) & ~((1ULL << apshift) - 1);
 
     if (is_ram_address(spapr, raddr)) {
         /* Regular RAM - should have WIMG=0010 */
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 2be04e9..db57a63 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -512,6 +512,41 @@ static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
     return 0; /* Bad page size encoding */
 }
 
+unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
+                                          uint64_t pte0, uint64_t pte1,
+                                          unsigned *seg_page_shift)
+{
+    CPUPPCState *env = &cpu->env;
+    int i;
+
+    if (!(pte0 & HPTE64_V_LARGE)) {
+        *seg_page_shift = 12;
+        return 12;
+    }
+
+    /*
+     * The encodings in env->sps need to be carefully chosen so that
+     * this gives an unambiguous result.
+     */
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_seg_page_size *sps = &env->sps.sps[i];
+        unsigned shift;
+
+        if (!sps->page_shift) {
+            break;
+        }
+
+        shift = hpte_page_shift(sps, pte0, pte1);
+        if (shift) {
+            *seg_page_shift = sps->page_shift;
+            return shift;
+        }
+    }
+
+    *seg_page_shift = 0;
+    return 0;
+}
+
 static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
                                    target_ulong eaddr)
 {
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 293a951..34cf975 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -16,6 +16,9 @@ void ppc_hash64_store_hpte(PowerPCCPU *cpu, target_ulong index,
 void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu,
                                target_ulong pte_index,
                                target_ulong pte0, target_ulong pte1);
+unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
+                                          uint64_t pte0, uint64_t pte1,
+                                          unsigned *seg_page_shift);
 #endif
 
 /*
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (8 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
@ 2016-01-25  5:15 ` David Gibson
  2016-01-25 11:10 ` [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
  10 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25  5:15 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc, David Gibson

Now that the TCG and spapr code has been extended to allow (semi-)
arbitrary page encodings in the CPU's 'sps' table, we can add the many
page sizes supported by real POWER7 and POWER8 hardware that we previously
didn't support in TCG.
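
For reference, the (base segment size, actual page size) pairs this
table enables, read off the diff below; MPSS is what allows the
entries where the actual page size differs from the segment size:

/* pte_enc values in parentheses:
 *   4K  segment: 4K (0x0), 64K (0x7), 16M (0x38)   <- MPSS
 *   64K segment: 64K (0x1), 16M (0x8)              <- MPSS
 *   16M segment: 16M (0x0)
 *   16G segment: 16G (0x3)
 */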

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/mmu-hash64.h     |  2 ++
 target-ppc/translate_init.c | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 34cf975..ab0f86b 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -48,6 +48,8 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
 #define SLB_VSID_LLP_MASK       (SLB_VSID_L | SLB_VSID_LP)
 #define SLB_VSID_4K             0x0000000000000000ULL
 #define SLB_VSID_64K            0x0000000000000110ULL
+#define SLB_VSID_16M            0x0000000000000100ULL
+#define SLB_VSID_16G            0x0000000000000120ULL
 
 /*
  * Hash page table definitions
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index f6babd2..32b3679 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -8104,6 +8104,36 @@ static Property powerpc_servercpu_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
+#ifdef CONFIG_SOFTMMU
+static const struct ppc_segment_page_sizes POWER7_POWER8_sps = {
+    .sps = {
+        {
+            .page_shift = 12, /* 4K */
+            .slb_enc = 0,
+            .enc = { { .page_shift = 12, .pte_enc = 0 },
+                     { .page_shift = 16, .pte_enc = 0x7 },
+                     { .page_shift = 24, .pte_enc = 0x38 }, },
+        },
+        {
+            .page_shift = 16, /* 64K */
+            .slb_enc = SLB_VSID_64K,
+            .enc = { { .page_shift = 16, .pte_enc = 0x1 },
+                     { .page_shift = 24, .pte_enc = 0x8 }, },
+        },
+        {
+            .page_shift = 24, /* 16M */
+            .slb_enc = SLB_VSID_16M,
+            .enc = { { .page_shift = 24, .pte_enc = 0 }, },
+        },
+        {
+            .page_shift = 34, /* 16G */
+            .slb_enc = SLB_VSID_16G,
+            .enc = { { .page_shift = 34, .pte_enc = 0x3 }, },
+        },
+    }
+};
+#endif /* CONFIG_SOFTMMU */
+
 static void init_proc_POWER7 (CPUPPCState *env)
 {
     init_proc_book3s_64(env, BOOK3S_CPU_POWER7);
@@ -8167,6 +8197,7 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
     pcc->mmu_model = POWERPC_MMU_2_06;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
+    pcc->sps = &POWER7_POWER8_sps;
 #endif
     pcc->excp_model = POWERPC_EXCP_POWER7;
     pcc->bus_model = PPC_FLAGS_INPUT_POWER7;
@@ -8247,6 +8278,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
     pcc->mmu_model = POWERPC_MMU_2_07;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
+    pcc->sps = &POWER7_POWER8_sps;
 #endif
     pcc->excp_model = POWERPC_EXCP_POWER7;
     pcc->bus_model = PPC_FLAGS_INPUT_POWER7;
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
  2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
                   ` (9 preceding siblings ...)
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
@ 2016-01-25 11:10 ` David Gibson
  2016-01-25 20:36   ` Alexander Graf
  10 siblings, 1 reply; 24+ messages in thread
From: David Gibson @ 2016-01-25 11:10 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 2886 bytes --]

On Mon, Jan 25, 2016 at 04:15:42PM +1100, David Gibson wrote:
> Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
> involving control bits in both the SLB and HPTE.  At present we
> support a few of the options, but far fewer than real hardware.
> 
> We're able to get away with that in practice, because guests use a
> device tree property to determine which page sizes are available and
> we are setting that to match.  However, the fact that the actual code
> doesn't necessarily match what we put into the table of available page sizes
> is another ugliness.
> 
> This series makes a number of cleanups to the page size handling.  The
> upshot is that afterwards the softmmu code operates off the same page
> size encoding table that is advertised to the guests, ensuring that
> they will be in sync.
> 
> Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
> include the options allowed in hardware (including MPSS).  We can fix
> other hash MMU based CPUs in future if anyone cares enough.
> 
> Please review, and I'll fold into ppc-for-2.6 for my next pull.

Bother, somehow missed a serious bug in here that's causing
oops-on-boot.  Sorry, still tracking it down.

> 
> Changes since RFC:
>   * Moved lookup of SLB encodings table from SLB lookup time to SLB
>     store time
> 
> David Gibson (10):
>   target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub
>   target-ppc: Convert mmu-hash{32,64}.[ch] from CPUPPCState to
>     PowerPCCPU
>   target-ppc: Rework ppc_store_slb
>   target-ppc: Rework SLB page size lookup
>   target-ppc: Use actual page size encodings from HPTE
>   target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one
>   target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one()
>   target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
>   target-ppc: Helper to determine page size information from hpte alone
>   target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG
> 
>  hw/ppc/spapr_hcall.c        | 102 ++++------------
>  target-ppc/cpu.h            |   1 +
>  target-ppc/helper.h         |   1 +
>  target-ppc/kvm.c            |   2 +-
>  target-ppc/kvm_ppc.h        |   5 -
>  target-ppc/machine.c        |  20 ++++
>  target-ppc/mmu-hash32.c     |  68 ++++++-----
>  target-ppc/mmu-hash32.h     |  30 ++---
>  target-ppc/mmu-hash64.c     | 286 ++++++++++++++++++++++++++++++++------------
>  target-ppc/mmu-hash64.h     |  30 +++--
>  target-ppc/mmu_helper.c     |  59 ++++-----
>  target-ppc/translate.c      |   2 +-
>  target-ppc/translate_init.c |  32 +++++
>  13 files changed, 389 insertions(+), 249 deletions(-)
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
@ 2016-01-25 13:26   ` David Gibson
  2016-01-25 20:18   ` Alexander Graf
  1 sibling, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-25 13:26 UTC (permalink / raw)
  To: benh; +Cc: lvivier, thuth, aik, agraf, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 2741 bytes --]

On Mon, Jan 25, 2016 at 04:15:47PM +1100, David Gibson wrote:
> At present the 64-bit hash MMU code uses information from the SLB to
> determine the page size of a translation.  We do need that information to
> correctly look up the hash table.  However the MMU also allows a
> possibly larger page size to be encoded into the HPTE itself, which is used
> to populate the TLB.  At present qemu doesn't check that, and so doesn't
> support the MPSS "Multiple Page Size per Segment" feature.
> 
> This makes a start on allowing this, by adding an hpte_page_shift()
> function which looks up the page size of an HPTE.  We use this to validate
> page size encodings on faults, and populate the qemu TLB with larger
> page sizes when appropriate.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/mmu-hash64.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 70 insertions(+), 4 deletions(-)
> 
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 28ad361..bcad826 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -21,6 +21,7 @@
>  #include "exec/helper-proto.h"
>  #include "qemu/error-report.h"
>  #include "sysemu/kvm.h"
> +#include "qemu/error-report.h"
>  #include "kvm_ppc.h"
>  #include "mmu-hash64.h"
>  
> @@ -474,6 +475,43 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>      return pte_offset;
>  }
>  
> +static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
> +    uint64_t pte0, uint64_t pte1)
> +{
> +    int i;
> +
> +    if (!(pte0 & HPTE64_V_LARGE)) {
> +        if (sps->page_shift != 12) {
> +            /* 4kiB page in a non 4kiB segment */
> +            return 0;
> +        }
> +        /* Normal 4kiB page */
> +        return 12;
> +    }
> +
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const struct ppc_one_page_size *ps = &sps->enc[i];
> +        uint64_t mask;
> +
> +        if (!ps->page_shift) {
> +            break;
> +        }
> +
> +        if (ps->page_shift == 12) {
> +            /* L bit is set so this can't be a 4kiB page */
> +            continue;
> +        }
> +
> +        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
> +
> +        if ((pte1 & mask) == ps->pte_enc) {

Gah.  This needs to be (ps->pte_enc << HPTE64_R_RPN_SHIFT) or
everything breaks.

I remember fixing this earlier, but somehow I managed to lose the fix
in both this posting and the previous one.
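
I.e., a sketch of the intended comparison (untested here):

    if ((pte1 & mask) == ((uint64_t)ps->pte_enc << HPTE64_R_RPN_SHIFT)) {
        return ps->page_shift;
    }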

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb David Gibson
@ 2016-01-25 19:22   ` Alexander Graf
  2016-01-27  0:04     ` David Gibson
  0 siblings, 1 reply; 24+ messages in thread
From: Alexander Graf @ 2016-01-25 19:22 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, thuth, qemu-ppc, qemu-devel



On 01/25/2016 06:15 AM, David Gibson wrote:
> ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
> Currently it takes two parameters, which contain values encoded as the
> register arguments to the slbmte instruction, one register contains the
> ESID portion of the SLBE and also the slot number, the other contains the
> VSID portion of the SLBE.
>
> We're shortly going to want to do some SLB updates from other code where
> it is more convenient to supply the slot number and ESID separately, so
> rework this function and its callers to work this way.
>
> As a bonus, this slightly simplifies the emulation of segment registers for
> when running a 32-bit OS on a 64-bit CPU.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   target-ppc/kvm.c        |  2 +-
>   target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
>   target-ppc/mmu-hash64.h |  3 ++-
>   target-ppc/mmu_helper.c | 14 +++++---------
>   4 files changed, 21 insertions(+), 22 deletions(-)
>
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 98d7ba6..18c7ba2 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
>                * Only restore valid entries
>                */
>               if (rb & SLB_ESID_V) {
> -                ppc_store_slb(cpu, rb, rs);
> +                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs);
>               }
>           }
>   #endif
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 03e25fd..5a6d33b 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
>       }
>   }
>   
> -int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> +                  target_ulong esid, target_ulong vsid)
>   {
>       CPUPPCState *env = &cpu->env;
> -    int slot = rb & 0xfff;
>       ppc_slb_t *slb = &env->slb[slot];
>   
> -    if (rb & (0x1000 - env->slb_nr)) {
> -        return -1; /* Reserved bits set or slot too high */
> +    if (slot >= env->slb_nr) {
> +        return -1; /* Bad slot number */
> +    }
> +    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
> +        return -1; /* Reserved bits set */
>       }
> -    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> +    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
>           return -1; /* Bad segment size */
>       }
> -    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
> +    if ((vsid & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
>           return -1; /* 1T segment on MMU that doesn't support it */
>       }
>   
> -    /* Mask out the slot number as we store the entry */
> -    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
> -    slb->vsid = rs;
> +    slb->esid = esid;
> +    slb->vsid = vsid;
>   
>       LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> -            " %016" PRIx64 "\n", __func__, slot, rb, rs,
> +            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
>               slb->esid, slb->vsid);
>   
>       return 0;
> @@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
>   {
>       PowerPCCPU *cpu = ppc_env_get_cpu(env);
>   
> -    if (ppc_store_slb(cpu, rb, rs) < 0) {
> +    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs) < 0) {

This might truncate the esid to 32 bits on 32-bit hosts, no? Should be
0xfffULL instead.

Alex
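
For what it's worth, a quick standalone check of the conversion
behaviour (assuming rb is 64-bit, as target_ulong is for this target):

/* ~0xfff is the int -4096; converting it to uint64_t for the AND
 * sign-extends, so both masks below come out as 0xfffffffffffff000.
 * 0xfffULL still states the intent more clearly. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t rb = 0x123456789abcdef0ULL;

    printf("%016" PRIx64 "\n", rb & ~0xfff);    /* 123456789abcd000 */
    printf("%016" PRIx64 "\n", rb & ~0xfffULL); /* 123456789abcd000 */
    return 0;
}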

>           helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
>                                      POWERPC_EXCP_INVAL);
>       }
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 6e3de7e..24fd2c4 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -6,7 +6,8 @@
>   #ifdef TARGET_PPC64
>   void ppc_hash64_check_page_sizes(PowerPCCPU *cpu, Error **errp);
>   void dump_slb(FILE *f, fprintf_function cpu_fprintf, PowerPCCPU *cpu);
> -int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs);
> +int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> +                  target_ulong esid, target_ulong vsid);
>   hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr);
>   int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong address, int rw,
>                                   int mmu_idx);
> diff --git a/target-ppc/mmu_helper.c b/target-ppc/mmu_helper.c
> index 0ab73bc..c040b17 100644
> --- a/target-ppc/mmu_helper.c
> +++ b/target-ppc/mmu_helper.c
> @@ -2088,21 +2088,17 @@ void helper_store_sr(CPUPPCState *env, target_ulong srnum, target_ulong value)
>               (int)srnum, value, env->sr[srnum]);
>   #if defined(TARGET_PPC64)
>       if (env->mmu_model & POWERPC_MMU_64) {
> -        uint64_t rb = 0, rs = 0;
> +        uint64_t esid, vsid;
>   
>           /* ESID = srnum */
> -        rb |= ((uint32_t)srnum & 0xf) << 28;
> -        /* Set the valid bit */
> -        rb |= SLB_ESID_V;
> -        /* Index = ESID */
> -        rb |= (uint32_t)srnum;
> +        esid = ((uint64_t)(srnum & 0xf) << 28) | SLB_ESID_V;
>   
>           /* VSID = VSID */
> -        rs |= (value & 0xfffffff) << 12;
> +        vsid = (value & 0xfffffff) << 12;
>           /* flags = flags */
> -        rs |= ((value >> 27) & 0xf) << 8;
> +        vsid |= ((value >> 27) & 0xf) << 8;
>   
> -        ppc_store_slb(cpu, rb, rs);
> +        ppc_store_slb(cpu, srnum, esid, vsid);
>       } else
>   #endif
>       if (env->sr[srnum] != value) {

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup David Gibson
@ 2016-01-25 19:38   ` Alexander Graf
  2016-01-27  0:59     ` David Gibson
  0 siblings, 1 reply; 24+ messages in thread
From: Alexander Graf @ 2016-01-25 19:38 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, thuth, qemu-ppc, qemu-devel



On 01/25/2016 06:15 AM, David Gibson wrote:
> Currently, the ppc_hash64_page_shift() function looks up a page size based
> on information in an SLB entry.  It open-codes the bit translation for
> existing CPUs; however, different CPU models can have different SLB
> encodings.  We already store those in the 'sps' table in CPUPPCState, but
> we don't currently enforce that that actually matches the logic in
> ppc_hash64_page_shift.
>
> This patch reworks lookup of page size from SLB in several ways:
>    * ppc_store_slb() will now fail (triggering an illegal instruction
>      exception) if given a bad SLB page size encoding
>    * On success ppc_store_slb() stores a pointer to the relevant entry in
>      the page size table in the SLB entry.  This is looked up directly from
>      the published table of page size encodings, so can't get out of sync.
>    * ppc_hash64_htab_lookup() and others now use this precached page size
>      information rather than decoding the SLB values
>    * Adjust ppc_hash64_pte_raddr() to take a page shift directly
>      instead of an SLB entry.  Both callers now have easy access to this,
>      and we'll need the flexibility shortly.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   target-ppc/cpu.h        |  1 +
>   target-ppc/machine.c    | 20 ++++++++++++++
>   target-ppc/mmu-hash64.c | 71 ++++++++++++++++++++++++++-----------------------
>   3 files changed, 59 insertions(+), 33 deletions(-)
>
> diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
> index 2bc96b4..0820390 100644
> --- a/target-ppc/cpu.h
> +++ b/target-ppc/cpu.h
> @@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
>   struct ppc_slb_t {
>       uint64_t esid;
>       uint64_t vsid;
> +    const struct ppc_one_seg_page_size *sps;
>   };
>   
>   #define MAX_SLB_ENTRIES         64
> diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> index b61c060..ca62d3e 100644
> --- a/target-ppc/machine.c
> +++ b/target-ppc/machine.c
> @@ -2,6 +2,7 @@
>   #include "hw/boards.h"
>   #include "sysemu/kvm.h"
>   #include "helper_regs.h"
> +#include "mmu-hash64.h"
>   
>   static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
>   {
> @@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
>       return (cpu->env.mmu_model & POWERPC_MMU_64);
>   }
>   
> +static int slb_post_load(void *opaque, int version_id)
> +{
> +    PowerPCCPU *cpu = opaque;
> +    CPUPPCState *env = &cpu->env;
> +    int i;
> +
> +    /* We've pulled in the raw esid and vsid values from the migration
> +     * stream, but we need to recompute the page size pointers */
> +    for (i = 0; i < env->slb_nr; i++) {
> +        if (ppc_store_slb(cpu, i, env->slb[i].esid, env->slb[i].vsid) < 0) {
> +            /* Migration source had bad values in its SLB */
> +            return -1;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>   static const VMStateDescription vmstate_slb = {
>       .name = "cpu/slb",
>       .version_id = 1,
>       .minimum_version_id = 1,
>       .needed = slb_needed,
> +    .post_load = slb_post_load,
>       .fields = (VMStateField[]) {
>           VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
>           VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 5a6d33b..28ad361 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -19,6 +19,7 @@
>    */
>   #include "cpu.h"
>   #include "exec/helper-proto.h"
> +#include "qemu/error-report.h"
>   #include "sysemu/kvm.h"
>   #include "kvm_ppc.h"
>   #include "mmu-hash64.h"
> @@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
>   {
>       CPUPPCState *env = &cpu->env;
>       ppc_slb_t *slb = &env->slb[slot];
> +    const struct ppc_one_seg_page_size *sps = NULL;
> +    int i;
>   
>       if (slot >= env->slb_nr) {
>           return -1; /* Bad slot number */
> @@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
>           return -1; /* 1T segment on MMU that doesn't support it */
>       }
>   
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
> +
> +        if (!sps1->page_shift) {
> +            break;
> +        }
> +
> +        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> +            sps = sps1;
> +            break;
> +        }
> +    }
> +
> +    if (!sps) {
> +        error_report("Bad page size encoding in SLB store: slot "TARGET_FMT_lu
> +                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
> +                     slot, esid, vsid);
> +        return -1;
> +    }
> +
>       slb->esid = esid;
>       slb->vsid = vsid;
> +    slb->sps = sps;
>   
>       LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
>               " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> @@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
>       return -1;
>   }
>   
> -static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
> -{
> -    uint64_t epnshift;
> -
> -    /* Page size according to the SLB, which we use to generate the
> -     * EPN for hash table lookup..  When we implement more recent MMU
> -     * extensions this might be different from the actual page size
> -     * encoded in the PTE */
> -    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
> -        epnshift = TARGET_PAGE_BITS;
> -    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
> -        epnshift = TARGET_PAGE_BITS_64K;
> -    } else {
> -        epnshift = TARGET_PAGE_BITS_16M;
> -    }
> -    return epnshift;
> -}
> -
>   static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>                                        ppc_slb_t *slb, target_ulong eaddr,
>                                        ppc_hash_pte64_t *pte)
> @@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>       CPUPPCState *env = &cpu->env;
>       hwaddr pte_offset;
>       hwaddr hash;
> -    uint64_t vsid, epnshift, epnmask, epn, ptem;
> +    uint64_t vsid, epnmask, epn, ptem;
> +
> +    /* The SLB store path should prevent any bad page size encodings
> +     * getting in there, so: */
> +    assert(slb->sps);
>   
> -    epnshift = ppc_hash64_page_shift(slb);
> -    epnmask = ~((1ULL << epnshift) - 1);
> +    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
>   
>       if (slb->vsid & SLB_VSID_B) {
>           /* 1TB segment */
>           vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
>           epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
> -        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
> +        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
>       } else {
>           /* 256M segment */
>           vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
>           epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
> -        hash = vsid ^ (epn >> epnshift);
> +        hash = vsid ^ (epn >> slb->sps->page_shift);
>       }
>       ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) & HPTE64_V_AVPN);
>   
> @@ -465,17 +474,12 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>       return pte_offset;
>   }
>   
> -static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t pte,
> +static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
>                                      target_ulong eaddr)
>   {
> -    hwaddr mask;
> -    int target_page_bits;
> +    hwaddr mask = (1ULL << page_shift) - 1;
>       hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
> -    /*
> -     * We support 4K, 64K and 16M now
> -     */
> -    target_page_bits = ppc_hash64_page_shift(slb);
> -    mask = (1ULL << target_page_bits) - 1;
> +
>       return (rpn & ~mask) | (eaddr & mask);
>   }

Isn't this basically deposit64 by now?

Alex
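
For reference, a sketch of the deposit64() form (deposit64() is in
include/qemu/bitops.h); untested, but it should be equivalent, since
it replaces the low page_shift bits of the RPN with the in-page
offset from eaddr:

static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
                                   target_ulong eaddr)
{
    hwaddr rpn = pte.pte1 & HPTE64_R_RPN;

    return deposit64(rpn, 0, page_shift, eaddr);
}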

>   
> @@ -600,7 +604,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>   
>       /* 7. Determine the real address from the PTE */
>   
> -    raddr = ppc_hash64_pte_raddr(slb, pte, eaddr);
> +    raddr = ppc_hash64_pte_raddr(slb->sps->page_shift, pte, eaddr);
>   
>       tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
>                    prot, mmu_idx, TARGET_PAGE_SIZE);
> @@ -630,7 +634,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>           return -1;
>       }
>   
> -    return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
> +    return ppc_hash64_pte_raddr(slb->sps->page_shift, pte, addr)
> +        & TARGET_PAGE_MASK;
>   }
>   
>   void ppc_hash64_store_hpte(PowerPCCPU *cpu,

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE
  2016-01-25  5:15 ` [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
  2016-01-25 13:26   ` David Gibson
@ 2016-01-25 20:18   ` Alexander Graf
  2016-01-27  0:40     ` David Gibson
  1 sibling, 1 reply; 24+ messages in thread
From: Alexander Graf @ 2016-01-25 20:18 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, thuth, qemu-ppc, qemu-devel



On 01/25/2016 06:15 AM, David Gibson wrote:
> At present the 64-bit hash MMU code uses information from the SLB to
> determine the page size of a translation.  We do need that information to
> correctly look up the hash table.  However the MMU also allows a
> possibly larger page size to be encoded into the HPTE itself, which is used
> to populate the TLB.  At present qemu doesn't check that, and so doesn't
> support the MPSS "Multiple Page Size per Segment" feature.
>
> This makes a start on allowing this, by adding an hpte_page_shift()
> function which looks up the page size of an HPTE.  We use this to validate
> page size encodings on faults, and populate the qemu TLB with larger
> page sizes when appropriate.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   target-ppc/mmu-hash64.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 70 insertions(+), 4 deletions(-)
>
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 28ad361..bcad826 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -21,6 +21,7 @@
>   #include "exec/helper-proto.h"
>   #include "qemu/error-report.h"
>   #include "sysemu/kvm.h"
> +#include "qemu/error-report.h"
>   #include "kvm_ppc.h"
>   #include "mmu-hash64.h"
>   
> @@ -474,6 +475,43 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
>       return pte_offset;
>   }
>   
> +static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
> +    uint64_t pte0, uint64_t pte1)
> +{
> +    int i;
> +
> +    if (!(pte0 & HPTE64_V_LARGE)) {
> +        if (sps->page_shift != 12) {
> +            /* 4kiB page in a non 4kiB segment */
> +            return 0;
> +        }
> +        /* Normal 4kiB page */
> +        return 12;
> +    }
> +
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const struct ppc_one_page_size *ps = &sps->enc[i];
> +        uint64_t mask;
> +
> +        if (!ps->page_shift) {
> +            break;
> +        }
> +
> +        if (ps->page_shift == 12) {
> +            /* L bit is set so this can't be a 4kiB page */
> +            continue;
> +        }
> +
> +        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
> +
> +        if ((pte1 & mask) == ps->pte_enc) {
> +            return ps->page_shift;
> +        }
> +    }
> +
> +    return 0; /* Bad page size encoding */
> +}
> +
>   static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
>                                      target_ulong eaddr)
>   {
> @@ -489,6 +527,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>       CPUState *cs = CPU(cpu);
>       CPUPPCState *env = &cpu->env;
>       ppc_slb_t *slb;
> +    unsigned apshift;
>       hwaddr pte_offset;
>       ppc_hash_pte64_t pte;
>       int pp_prot, amr_prot, prot;
> @@ -552,6 +591,28 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
>       qemu_log_mask(CPU_LOG_MMU,
>                   "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
>   
> +    /* Validate page size encoding */
> +    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    if (!apshift) {
> +        error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
> +                     " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
> +        /* Treat it like a hash miss for the guest */
> +        if (rwx == 2) {
> +            cs->exception_index = POWERPC_EXCP_ISI;
> +            env->error_code = 0x40000000;
> +        } else {
> +            cs->exception_index = POWERPC_EXCP_DSI;
> +            env->error_code = 0;
> +            env->spr[SPR_DAR] = eaddr;
> +            if (rwx == 1) {
> +                env->spr[SPR_DSISR] = 0x42000000;
> +            } else {
> +                env->spr[SPR_DSISR] = 0x40000000;

I know that we don't do this for any other DSISR setting yet, but do you 
think we could mark the start here and use names for the bits instead? 
The kernel has a few nice defines.


Alex
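
For reference, a sketch of what that could look like; the bit names
below follow Linux's arch/powerpc/include/asm/reg.h and are not
defined in QEMU at this point, so treat them as assumptions:

/* DSISR_NOHPTE: no translation found; DSISR_ISSTORE: access was a store */
#define DSISR_NOHPTE   0x40000000
#define DSISR_ISSTORE  0x02000000

env->spr[SPR_DSISR] = DSISR_NOHPTE | (rwx == 1 ? DSISR_ISSTORE : 0);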

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
  2016-01-25 11:10 ` [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
@ 2016-01-25 20:36   ` Alexander Graf
  2016-01-26  5:41     ` David Gibson
  0 siblings, 1 reply; 24+ messages in thread
From: Alexander Graf @ 2016-01-25 20:36 UTC (permalink / raw)
  To: David Gibson, benh; +Cc: aik, lvivier, thuth, qemu-ppc, qemu-devel



On 01/25/2016 12:10 PM, David Gibson wrote:
> On Mon, Jan 25, 2016 at 04:15:42PM +1100, David Gibson wrote:
>> Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
>> involving control bits in both the SLB and HPTE.  At present we
>> support a few of the options, but far fewer than real hardware.
>>
>> We're able to get away with that in practice, because guests use a
>> device tree property to determine which page sizes are available and
>> we are setting that to match.  However, the fact that the actual code
>>doesn't necessarily match what we put into the table of available page sizes
>> is another ugliness.
>>
>> This series makes a number of cleanups to the page size handling.  The
>> upshot is that afterwards the softmmu code operates off the same page
>> size encoding table that is advertised to the guests, ensuring that
>> they will be in sync.
>>
>> Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
>> include the options allowed in hardware (including MPSS).  We can fix
>> other hash MMU based CPUs in future if anyone cares enough.
>>
>> Please review, and I'll fold into ppc-for-2.6 for my next pull.
> Bother, somehow missed a serious bug in here that's causing
> oops-on-boot.  Sorry, still tracking it down.

I still have no idea where your bug is (bisect probably should get you 
there pretty quick), but the overall concept sounds very reasonable to 
me. Please benchmark performance before and after in the next cover 
letter also :)

Reviewed-by: Alexander Graf <agraf@suse.de>

Alex

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
  2016-01-25 20:36   ` Alexander Graf
@ 2016-01-26  5:41     ` David Gibson
  2016-01-26  7:09       ` Alexander Graf
  0 siblings, 1 reply; 24+ messages in thread
From: David Gibson @ 2016-01-26  5:41 UTC (permalink / raw)
  To: Alexander Graf; +Cc: lvivier, thuth, aik, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 2117 bytes --]

On Mon, Jan 25, 2016 at 09:36:40PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 12:10 PM, David Gibson wrote:
> >On Mon, Jan 25, 2016 at 04:15:42PM +1100, David Gibson wrote:
> >>Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
> >>involving control bits in both the SLB and HPTE.  At present we
> >>support a few of the options, but far fewer than real hardware.
> >>
> >>We're able to get away with that in practice, because guests use a
> >>device tree property to determine which page sizes are available and
> >>we are setting that to match.  However, the fact that the actual code
> >>doesn't necessarily match what we put into the table of available page sizes
> >>is another ugliness.
> >>
> >>This series makes a number of cleanups to the page size handling.  The
> >>upshot is that afterwards the softmmu code operates off the same page
> >>size encoding table that is advertised to the guests, ensuring that
> >>they will be in sync.
> >>
> >>Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
> >>include the options allowed in hardware (including MPSS).  We can fix
> >>other hash MMU based CPUs in future if anyone cares enough.
> >>
> >>Please review, and I'll fold into ppc-for-2.6 for my next pull.
> >Bother, somehow missed a serious bug in here that's causing
> >oops-on-boot.  Sorry, still tracking it down.
> 
> I still have no idea where your bug is (bisect probably should get you there
> pretty quick),

Alas, no, because the bug only triggers once all the page sizes are
added for POWER8 in the last patch.

Luckily I found it anyway (see earlier reply).

> but the overall concept sounds very reasonable to me. Please
> benchmark performance before and after in the next cover letter also
> :)

Hrm.. as always the question is what benchmark?

> 
> Reviewed-by: Alexander Graf <agraf@suse.de>
> 
> Alex
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG
  2016-01-26  5:41     ` David Gibson
@ 2016-01-26  7:09       ` Alexander Graf
  0 siblings, 0 replies; 24+ messages in thread
From: Alexander Graf @ 2016-01-26  7:09 UTC (permalink / raw)
  To: David Gibson; +Cc: lvivier, thuth, aik, qemu-devel, qemu-ppc



> Am 26.01.2016 um 06:41 schrieb David Gibson <david@gibson.dropbear.id.au>:
> 
>> On Mon, Jan 25, 2016 at 09:36:40PM +0100, Alexander Graf wrote:
>> 
>> 
>>> On 01/25/2016 12:10 PM, David Gibson wrote:
>>>> On Mon, Jan 25, 2016 at 04:15:42PM +1100, David Gibson wrote:
>>>> Encoding of page sizes on 64-bit hash MMUs for Power is rather arcane,
>>>> involving control bits in both the SLB and HPTE.  At present we
>>>> support a few of the options, but far fewer than real hardware.
>>>> 
>>>> We're able to get away with that in practice, because guests use a
>>>> device tree property to determine which page sizes are available and
>>>> we are setting that to match.  However, the fact that the actual code
>>>> doesn't necessarily match what we put into the table of available page sizes
>>>> is another ugliness.
>>>> 
>>>> This series makes a number of cleanups to the page size handling.  The
>>>> upshot is that afterwards the softmmu code operates off the same page
>>>> size encoding table that is advertised to the guests, ensuring that
>>>> they will be in sync.
>>>> 
>>>> Finally, we extend the table of allowed sizes for POWER7 and POWER8 to
>>>> include the options allowed in hardware (including MPSS).  We can fix
>>>> other hash MMU based CPUs in future if anyone cares enough.
>>>> 
>>>> Please review, and I'll fold into ppc-for-2.6 for my next pull.
>>> Bother, somehow missed a serious bug in here that's causing
>>> oops-on-boot.  Sorry, still tracking it down.
>> 
>> I still have no idea where your bug is (bisect probably should get you there
>> pretty quick),
> 
> Alas, no, because the bug only triggers once all the page sizes are
> added for POWER8 in the last patch.
> 
> Luckily I found it anyway (see earlier reply).
> 
>> but the overall concept sounds very reasonable to me. Please
>> benchmark performance before and after in the next cover letter also
>> :)
> 
> Hrm.. as always the question is what benchmark?

For this "time" on a simple kernel boot+automatic shutdown sequence should be enough.

Alex

> 
>> 
>> Reviewed-by: Alexander Graf <agraf@suse.de>
>> 
>> Alex
> 
> -- 
> David Gibson            | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au    | minimalist, thank you.  NOT _the_ _other_
>                | _way_ _around_!
> http://www.ozlabs.org/~dgibson

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb
  2016-01-25 19:22   ` Alexander Graf
@ 2016-01-27  0:04     ` David Gibson
  2016-01-27  7:32       ` Thomas Huth
  0 siblings, 1 reply; 24+ messages in thread
From: David Gibson @ 2016-01-27  0:04 UTC (permalink / raw)
  To: Alexander Graf; +Cc: lvivier, thuth, aik, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 3970 bytes --]

On Mon, Jan 25, 2016 at 08:22:51PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 06:15 AM, David Gibson wrote:
> >ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
> >Currently it takes two parameters, which contain values encoded as the
> >register arguments to the slbmte instruction: one register contains the
> >ESID portion of the SLBE and also the slot number, the other contains the
> >VSID portion of the SLBE.
> >
> >We're shortly going to want to do some SLB updates from other code where
> >it is more convenient to supply the slot number and ESID separately, so
> >rework this function and its callers to work this way.
> >
> >As a bonus, this slightly simplifies the emulation of segment registers for
> >when running a 32-bit OS on a 64-bit CPU.
> >
> >Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> >---
> >  target-ppc/kvm.c        |  2 +-
> >  target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
> >  target-ppc/mmu-hash64.h |  3 ++-
> >  target-ppc/mmu_helper.c | 14 +++++---------
> >  4 files changed, 21 insertions(+), 22 deletions(-)
> >
> >diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> >index 98d7ba6..18c7ba2 100644
> >--- a/target-ppc/kvm.c
> >+++ b/target-ppc/kvm.c
> >@@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
> >               * Only restore valid entries
> >               */
> >              if (rb & SLB_ESID_V) {
> >-                ppc_store_slb(cpu, rb, rs);
> >+                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs);
> >              }
> >          }
> >  #endif
> >diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> >index 03e25fd..5a6d33b 100644
> >--- a/target-ppc/mmu-hash64.c
> >+++ b/target-ppc/mmu-hash64.c
> >@@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
> >      }
> >  }
> >-int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
> >+int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >+                  target_ulong esid, target_ulong vsid)
> >  {
> >      CPUPPCState *env = &cpu->env;
> >-    int slot = rb & 0xfff;
> >      ppc_slb_t *slb = &env->slb[slot];
> >-    if (rb & (0x1000 - env->slb_nr)) {
> >-        return -1; /* Reserved bits set or slot too high */
> >+    if (slot >= env->slb_nr) {
> >+        return -1; /* Bad slot number */
> >+    }
> >+    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
> >+        return -1; /* Reserved bits set */
> >      }
> >-    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> >+    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> >          return -1; /* Bad segment size */
> >      }
> >-    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
> >+    if ((vsid & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
> >          return -1; /* 1T segment on MMU that doesn't support it */
> >      }
> >-    /* Mask out the slot number as we store the entry */
> >-    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
> >-    slb->vsid = rs;
> >+    slb->esid = esid;
> >+    slb->vsid = vsid;
> >      LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> >-            " %016" PRIx64 "\n", __func__, slot, rb, rs,
> >+            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> >              slb->esid, slb->vsid);
> >      return 0;
> >@@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
> >  {
> >      PowerPCCPU *cpu = ppc_env_get_cpu(env);
> >-    if (ppc_store_slb(cpu, rb, rs) < 0) {
> >+    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs) < 0) {
> 
> This might truncate the esid to 32 bits on 32-bit hosts, no? Should be
> 0xfffULL instead.

Good point, nice catch.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE
  2016-01-25 20:18   ` Alexander Graf
@ 2016-01-27  0:40     ` David Gibson
  0 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-27  0:40 UTC (permalink / raw)
  To: Alexander Graf; +Cc: lvivier, thuth, aik, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 4722 bytes --]

On Mon, Jan 25, 2016 at 09:18:18PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 06:15 AM, David Gibson wrote:
> >At present the 64-bit hash MMU code uses information from the SLB to
> >determine the page size of a translation.  We do need that information to
> >correctly look up the hash table.  However the MMU also allows a
> >possibly larger page size to be encoded into the HPTE itself, which is used
> >to populate the TLB.  At present qemu doesn't check that, and so doesn't
> >support the MPSS "Multiple Page Size per Segment" feature.
> >
> >This makes a start on allowing this, by adding an hpte_page_shift()
> >function which looks up the page size of an HPTE.  We use this to validate
> >page size encodings on faults, and populate the qemu TLB with larger
> >page sizes when appropriate.
> >
> >Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> >---
> >  target-ppc/mmu-hash64.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 70 insertions(+), 4 deletions(-)
> >
> >diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> >index 28ad361..bcad826 100644
> >--- a/target-ppc/mmu-hash64.c
> >+++ b/target-ppc/mmu-hash64.c
> >@@ -21,6 +21,7 @@
> >  #include "exec/helper-proto.h"
> >  #include "qemu/error-report.h"
> >  #include "sysemu/kvm.h"
> >+#include "qemu/error-report.h"
> >  #include "kvm_ppc.h"
> >  #include "mmu-hash64.h"
> >@@ -474,6 +475,43 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >      return pte_offset;
> >  }
> >+static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
> >+    uint64_t pte0, uint64_t pte1)
> >+{
> >+    int i;
> >+
> >+    if (!(pte0 & HPTE64_V_LARGE)) {
> >+        if (sps->page_shift != 12) {
> >+            /* 4kiB page in a non 4kiB segment */
> >+            return 0;
> >+        }
> >+        /* Normal 4kiB page */
> >+        return 12;
> >+    }
> >+
> >+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> >+        const struct ppc_one_page_size *ps = &sps->enc[i];
> >+        uint64_t mask;
> >+
> >+        if (!ps->page_shift) {
> >+            break;
> >+        }
> >+
> >+        if (ps->page_shift == 12) {
> >+            /* L bit is set so this can't be a 4kiB page */
> >+            continue;
> >+        }
> >+
> >+        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
> >+
> >+        if ((pte1 & mask) == ps->pte_enc) {
> >+            return ps->page_shift;
> >+        }
> >+    }
> >+
> >+    return 0; /* Bad page size encoding */
> >+}
> >+
> >  static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
> >                                     target_ulong eaddr)
> >  {
> >@@ -489,6 +527,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
> >      CPUState *cs = CPU(cpu);
> >      CPUPPCState *env = &cpu->env;
> >      ppc_slb_t *slb;
> >+    unsigned apshift;
> >      hwaddr pte_offset;
> >      ppc_hash_pte64_t pte;
> >      int pp_prot, amr_prot, prot;
> >@@ -552,6 +591,28 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
> >      qemu_log_mask(CPU_LOG_MMU,
> >                  "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
> >+    /* Validate page size encoding */
> >+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> >+    if (!apshift) {
> >+        error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
> >+                     " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
> >+        /* Treat it like a hash miss for the guest */
> >+        if (rwx == 2) {
> >+            cs->exception_index = POWERPC_EXCP_ISI;
> >+            env->error_code = 0x40000000;
> >+        } else {
> >+            cs->exception_index = POWERPC_EXCP_DSI;
> >+            env->error_code = 0;
> >+            env->spr[SPR_DAR] = eaddr;
> >+            if (rwx == 1) {
> >+                env->spr[SPR_DSISR] = 0x42000000;
> >+            } else {
> >+                env->spr[SPR_DSISR] = 0x40000000;
> 
> I know that we don't do this for any other DSISR setting yet, but do you
> think we could mark the start here and use names for the bits instead? The
> kernel has a few nice defines.

So, that would be a bit odd, since it's just a copy of the exception
code from below.

But in fact, BenH pointed out that throwing a machine check is
probably more correct behaviour than a DSI or ISI, so this will go
away anyway.
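
For reference, what Alex suggests might look like the sketch below.  The
define names and values are the Linux kernel's DSISR bit definitions from
arch/powerpc/include/asm/reg.h; wiring them up in this exact spot is
purely illustrative, particularly since this path is expected to become a
machine check instead:

    #define DSISR_NOHPTE   0x40000000  /* no translation found */
    #define DSISR_ISSTORE  0x02000000  /* fault was caused by a store */

    /* rwx == 1 is a store fault; this matches the open-coded
     * 0x42000000 / 0x40000000 values in the patch above */
    env->spr[SPR_DSISR] = DSISR_NOHPTE | (rwx == 1 ? DSISR_ISSTORE : 0);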

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup
  2016-01-25 19:38   ` Alexander Graf
@ 2016-01-27  0:59     ` David Gibson
  0 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-27  0:59 UTC (permalink / raw)
  To: Alexander Graf; +Cc: lvivier, thuth, aik, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 8691 bytes --]

On Mon, Jan 25, 2016 at 08:38:49PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 06:15 AM, David Gibson wrote:
> >Currently, the ppc_hash64_page_shift() function looks up a page size based
> >on information in an SLB entry.  It open codes the bit translation for
> >existing CPUs, however different CPU models can have different SLB
> >encodings.  We already store those in the 'sps' table in CPUPPCState, but
> >we don't currently enforce that that actually matches the logic in
> >ppc_hash64_page_shift.
> >
> >This patch reworks lookup of page size from SLB in several ways:
> >   * ppc_store_slb() will now fail (triggering an illegal instruction
> >     exception) if given a bad SLB page size encoding
> >   * On success ppc_store_slb() stores a pointer to the relevant entry in
> >     the page size table in the SLB entry.  This is looked up directly from
> >     the published table of page size encodings, so it can't get out of sync.
> >   * ppc_hash64_htab_lookup() and others now use this precached page size
> >     information rather than decoding the SLB values
> >   * Adjust ppc_hash64_pte_raddr() to take a page shift directly
> >     instead of an SLB entry.  Both callers now have easy access to this,
> >     and we'll need the flexibility shortly.
> >
> >Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> >---
> >  target-ppc/cpu.h        |  1 +
> >  target-ppc/machine.c    | 20 ++++++++++++++
> >  target-ppc/mmu-hash64.c | 71 ++++++++++++++++++++++++++-----------------------
> >  3 files changed, 59 insertions(+), 33 deletions(-)
> >
> >diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
> >index 2bc96b4..0820390 100644
> >--- a/target-ppc/cpu.h
> >+++ b/target-ppc/cpu.h
> >@@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
> >  struct ppc_slb_t {
> >      uint64_t esid;
> >      uint64_t vsid;
> >+    const struct ppc_one_seg_page_size *sps;
> >  };
> >  #define MAX_SLB_ENTRIES         64
> >diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> >index b61c060..ca62d3e 100644
> >--- a/target-ppc/machine.c
> >+++ b/target-ppc/machine.c
> >@@ -2,6 +2,7 @@
> >  #include "hw/boards.h"
> >  #include "sysemu/kvm.h"
> >  #include "helper_regs.h"
> >+#include "mmu-hash64.h"
> >  static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
> >  {
> >@@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
> >      return (cpu->env.mmu_model & POWERPC_MMU_64);
> >  }
> >+static int slb_post_load(void *opaque, int version_id)
> >+{
> >+    PowerPCCPU *cpu = opaque;
> >+    CPUPPCState *env = &cpu->env;
> >+    int i;
> >+
> >+    /* We've pulled in the raw esid and vsid values from the migration
> >+     * stream, but we need to recompute the page size pointers */
> >+    for (i = 0; i < env->slb_nr; i++) {
> >+        if (ppc_store_slb(cpu, i, env->slb[i].esid, env->slb[i].vsid) < 0) {
> >+            /* Migration source had bad values in its SLB */
> >+            return -1;
> >+        }
> >+    }
> >+
> >+    return 0;
> >+}
> >+
> >  static const VMStateDescription vmstate_slb = {
> >      .name = "cpu/slb",
> >      .version_id = 1,
> >      .minimum_version_id = 1,
> >      .needed = slb_needed,
> >+    .post_load = slb_post_load,
> >      .fields = (VMStateField[]) {
> >          VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
> >          VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
> >diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> >index 5a6d33b..28ad361 100644
> >--- a/target-ppc/mmu-hash64.c
> >+++ b/target-ppc/mmu-hash64.c
> >@@ -19,6 +19,7 @@
> >   */
> >  #include "cpu.h"
> >  #include "exec/helper-proto.h"
> >+#include "qemu/error-report.h"
> >  #include "sysemu/kvm.h"
> >  #include "kvm_ppc.h"
> >  #include "mmu-hash64.h"
> >@@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >  {
> >      CPUPPCState *env = &cpu->env;
> >      ppc_slb_t *slb = &env->slb[slot];
> >+    const struct ppc_one_seg_page_size *sps = NULL;
> >+    int i;
> >      if (slot >= env->slb_nr) {
> >          return -1; /* Bad slot number */
> >@@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >          return -1; /* 1T segment on MMU that doesn't support it */
> >      }
> >+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> >+        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
> >+
> >+        if (!sps1->page_shift) {
> >+            break;
> >+        }
> >+
> >+        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> >+            sps = sps1;
> >+            break;
> >+        }
> >+    }
> >+
> >+    if (!sps) {
> >+        error_report("Bad page size encoding in SLB store: slot "TARGET_FMT_lu
> >+                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
> >+                     slot, esid, vsid);
> >+        return -1;
> >+    }
> >+
> >      slb->esid = esid;
> >      slb->vsid = vsid;
> >+    slb->sps = sps;
> >      LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> >              " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> >@@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
> >      return -1;
> >  }
> >-static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
> >-{
> >-    uint64_t epnshift;
> >-
> >-    /* Page size according to the SLB, which we use to generate the
> >-     * EPN for hash table lookup..  When we implement more recent MMU
> >-     * extensions this might be different from the actual page size
> >-     * encoded in the PTE */
> >-    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
> >-        epnshift = TARGET_PAGE_BITS;
> >-    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
> >-        epnshift = TARGET_PAGE_BITS_64K;
> >-    } else {
> >-        epnshift = TARGET_PAGE_BITS_16M;
> >-    }
> >-    return epnshift;
> >-}
> >-
> >  static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >                                       ppc_slb_t *slb, target_ulong eaddr,
> >                                       ppc_hash_pte64_t *pte)
> >@@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >      CPUPPCState *env = &cpu->env;
> >      hwaddr pte_offset;
> >      hwaddr hash;
> >-    uint64_t vsid, epnshift, epnmask, epn, ptem;
> >+    uint64_t vsid, epnmask, epn, ptem;
> >+
> >+    /* The SLB store path should prevent any bad page size encodings
> >+     * getting in there, so: */
> >+    assert(slb->sps);
> >-    epnshift = ppc_hash64_page_shift(slb);
> >-    epnmask = ~((1ULL << epnshift) - 1);
> >+    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
> >      if (slb->vsid & SLB_VSID_B) {
> >          /* 1TB segment */
> >          vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
> >          epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
> >-        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
> >+        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
> >      } else {
> >          /* 256M segment */
> >          vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
> >          epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
> >-        hash = vsid ^ (epn >> epnshift);
> >+        hash = vsid ^ (epn >> slb->sps->page_shift);
> >      }
> >      ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) & HPTE64_V_AVPN);
> >@@ -465,17 +474,12 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >      return pte_offset;
> >  }
> >-static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t pte,
> >+static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
> >                                     target_ulong eaddr)
> >  {
> >-    hwaddr mask;
> >-    int target_page_bits;
> >+    hwaddr mask = (1ULL << page_shift) - 1;
> >      hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
> >-    /*
> >-     * We support 4K, 64K and 16M now
> >-     */
> >-    target_page_bits = ppc_hash64_page_shift(slb);
> >-    mask = (1ULL << target_page_bits) - 1;
> >+
> >      return (rpn & ~mask) | (eaddr & mask);
> >  }
> 
> Isn't this basically deposit64 by now?

True.  I'll remove ppc_hash64_pte_raddr() entirely and use deposit64
instead.
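
For reference, the deposit64() form might look like the sketch below.
deposit64() is the bitfield helper from QEMU's include/qemu/bitops.h; the
variable names (raddr, apshift) follow the patches above, and the exact
final shape in the respun series is an assumption:

    /* deposit64(value, start, length, fieldval) replaces 'length' bits
     * of 'value' starting at bit 'start' with the low bits of
     * 'fieldval'.  With start == 0 that is exactly
     * (value & ~mask) | (fieldval & mask), i.e. the body of the old
     * ppc_hash64_pte_raddr(): */
    raddr = deposit64(pte.pte1 & HPTE64_R_RPN, 0, apshift, eaddr);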

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb
  2016-01-27  0:04     ` David Gibson
@ 2016-01-27  7:32       ` Thomas Huth
  2016-01-27 10:07         ` David Gibson
  0 siblings, 1 reply; 24+ messages in thread
From: Thomas Huth @ 2016-01-27  7:32 UTC (permalink / raw)
  To: David Gibson, Alexander Graf; +Cc: aik, qemu-ppc, qemu-devel, lvivier

[-- Attachment #1: Type: text/plain, Size: 2040 bytes --]

On 27.01.2016 01:04, David Gibson wrote:
> On Mon, Jan 25, 2016 at 08:22:51PM +0100, Alexander Graf wrote:
>>
>>
>> On 01/25/2016 06:15 AM, David Gibson wrote:
>>> ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
>>> Currently it takes two parameters, which contain values encoded as the
>>> register arguments to the slbmte instruction: one register contains the
>>> ESID portion of the SLBE and also the slot number, the other contains the
>>> VSID portion of the SLBE.
>>>
>>> We're shortly going to want to do some SLB updates from other code where
>>> it is more convenient to supply the slot number and ESID separately, so
>>> rework this function and its callers to work this way.
>>>
>>> As a bonus, this slightly simplifies the emulation of segment registers for
>>> when running a 32-bit OS on a 64-bit CPU.
>>>
>>> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
>>> ---
>>>  target-ppc/kvm.c        |  2 +-
>>>  target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
>>>  target-ppc/mmu-hash64.h |  3 ++-
>>>  target-ppc/mmu_helper.c | 14 +++++---------
>>>  4 files changed, 21 insertions(+), 22 deletions(-)
...
>>> @@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
>>>  {
>>>      PowerPCCPU *cpu = ppc_env_get_cpu(env);
>>> -    if (ppc_store_slb(cpu, rb, rs) < 0) {
>>> +    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs) < 0) {
>>
>> This might truncate the esid to 32 bits on 32-bit hosts, no? Should be
>> 0xfffULL instead.
> 
> Good point, nice catch.

Are you sure that it is really needed? If I run the following test
program on my 64-bit system:

#include <stdio.h>

int main(void)
{
	unsigned long long ll = -1ULL;
	/* ~0xfff is an int and gets sign-extended on conversion to u64 */
	printf("%llx %llx\n", ll, ll & ~0xfff);
	return 0;
}

Then I get this output:

ffffffffffffffff fffffffffffff000

So it sounds like the value is sign-extended when it is cast to 64-bit.

However, if you respin this patch series anyway, it might be better to
add the ULL for clarity.

 Thomas



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb
  2016-01-27  7:32       ` Thomas Huth
@ 2016-01-27 10:07         ` David Gibson
  0 siblings, 0 replies; 24+ messages in thread
From: David Gibson @ 2016-01-27 10:07 UTC (permalink / raw)
  To: Thomas Huth; +Cc: lvivier, aik, Alexander Graf, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 2557 bytes --]

On Wed, Jan 27, 2016 at 08:32:25AM +0100, Thomas Huth wrote:
> On 27.01.2016 01:04, David Gibson wrote:
> > On Mon, Jan 25, 2016 at 08:22:51PM +0100, Alexander Graf wrote:
> >>
> >>
> >> On 01/25/2016 06:15 AM, David Gibson wrote:
> >>> ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
> >>> Currently it takes two parameters, which contain values encoded as the
> >>> register arguments to the slbmte instruction: one register contains the
> >>> ESID portion of the SLBE and also the slot number, the other contains the
> >>> VSID portion of the SLBE.
> >>>
> >>> We're shortly going to want to do some SLB updates from other code where
> >>> it is more convenient to supply the slot number and ESID separately, so
> >>> rework this function and its callers to work this way.
> >>>
> >>> As a bonus, this slightly simplifies the emulation of segment registers for
> >>> when running a 32-bit OS on a 64-bit CPU.
> >>>
> >>> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> >>> ---
> >>>  target-ppc/kvm.c        |  2 +-
> >>>  target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
> >>>  target-ppc/mmu-hash64.h |  3 ++-
> >>>  target-ppc/mmu_helper.c | 14 +++++---------
> >>>  4 files changed, 21 insertions(+), 22 deletions(-)
> ...
> >>> @@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
> >>>  {
> >>>      PowerPCCPU *cpu = ppc_env_get_cpu(env);
> >>> -    if (ppc_store_slb(cpu, rb, rs) < 0) {
> >>> +    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs) < 0) {
> >>
> >> This might truncate the esid to 32 bits on 32-bit hosts, no? Should be
> >> 0xfffULL instead.
> > 
> > Good point, nice catch.
> 
> Are you sure that it is really needed? If I run the following test
> program on my 64-bit system:
> 
> #include <stdio.h>
> 
> int main(void)
> {
> 	unsigned long long ll = -1ULL;
> 	/* ~0xfff is an int and gets sign-extended on conversion to u64 */
> 	printf("%llx %llx\n", ll, ll & ~0xfff);
> 	return 0;
> }
> 
> Then I get this output:
> 
> ffffffffffffffff fffffffffffff000
> 
> So it sounds like the value is sign-extended when it is cast to
> 64-bit.

Ah, yes.  It would be promoted to s64 and sign extended before being
promoted to u64.  Still that's a pretty subtle detail to be relying on.
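
To spell the subtlety out, a minimal, self-contained comparison of the
two forms (the behaviour shown follows standard C conversion rules; the
variable names are made up):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long rb = -1ULL;

        /* ~0xfff is a plain int (-4096); converting it to unsigned
         * long long preserves its value modulo 2^64, which amounts to
         * sign extension, so the high bits survive: */
        unsigned long long masked_int = rb & ~0xfff;

        /* The ULL suffix makes the operand 64-bit from the start: */
        unsigned long long masked_ull = rb & ~0xfffULL;

        printf("%llx %llx\n", masked_int, masked_ull);
        /* prints: fffffffffffff000 fffffffffffff000 */
        return 0;
    }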
 
> However, if you respin this patch series anyway, it might be better to
> add the ULL for clarity.

Already done.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2016-01-27 10:12 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb David Gibson
2016-01-25 19:22   ` Alexander Graf
2016-01-27  0:04     ` David Gibson
2016-01-27  7:32       ` Thomas Huth
2016-01-27 10:07         ` David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup David Gibson
2016-01-25 19:38   ` Alexander Graf
2016-01-27  0:59     ` David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
2016-01-25 13:26   ` David Gibson
2016-01-25 20:18   ` Alexander Graf
2016-01-27  0:40     ` David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
2016-01-25 11:10 ` [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
2016-01-25 20:36   ` Alexander Graf
2016-01-26  5:41     ` David Gibson
2016-01-26  7:09       ` Alexander Graf
