* [Qemu-devel] [PATCH 0/2] pnv: handle real mode addressing in HV mode
@ 2016-06-28  6:48 Cédric Le Goater
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support Cédric Le Goater
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation Cédric Le Goater
  0 siblings, 2 replies; 13+ messages in thread
From: Cédric Le Goater @ 2016-06-28  6:48 UTC (permalink / raw)
  To: David Gibson
  Cc: Benjamin Herrenschmidt, qemu-devel, qemu-ppc, Cedric Le Goater

Hello,

Here are two more patches which are prereq for PowerNV.

I have modified the code to fit the changes made to the MMU code in
early 2016. I haven't seen any breakage in the tests, but this clearly
needs a closer look by experts.

Thanks,

C.


Benjamin Herrenschmidt (2):
  ppc: Add proper real mode translation support
  ppc: Fix 64K pages support in full emulation

 hw/ppc/spapr.c              |   7 ++
 target-ppc/cpu-qom.h        |   3 +
 target-ppc/mmu-hash64.c     | 185 ++++++++++++++++++++++++++++++++++++++------
 target-ppc/mmu-hash64.h     |   1 +
 target-ppc/translate_init.c |  32 +++++++-
 5 files changed, 201 insertions(+), 27 deletions(-)

-- 
2.1.4


* [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support
  2016-06-28  6:48 [Qemu-devel] [PATCH 0/2] pnv: handle real mode addressing in HV mode Cédric Le Goater
@ 2016-06-28  6:48 ` Cédric Le Goater
  2016-06-29  2:41   ` David Gibson
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation Cédric Le Goater
  1 sibling, 1 reply; 13+ messages in thread
From: Cédric Le Goater @ 2016-06-28  6:48 UTC (permalink / raw)
  To: David Gibson
  Cc: Benjamin Herrenschmidt, qemu-devel, qemu-ppc, Cedric Le Goater

From: Benjamin Herrenschmidt <benh@kernel.crashing.org>

This adds proper support for translating real mode addresses based
on the combination of HV and LPCR bits. This handles HRMOR offset
for hypervisor real mode, and both RMA and VRMA modes for guest
real mode. PAPR mode adjusts the offsets appropriately to match the
RMA used in TCG, but we need to limit it to the max supported by the
implementation (16G).
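The translation rules described above can be sketched as standalone logic. This is an illustrative simplification only (the struct and function names below are made up, and the VRMA path is omitted), not the QEMU implementation itself:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of real-mode address computation as described
 * above: in hypervisor real mode the HRMOR offset is ORed in when the
 * top effective-address bit is clear; in guest real mode the address
 * must fall below the RMLS limit and is offset by RMOR. (The VRMA
 * case, which fabricates an SLB entry instead, is not shown.)
 */
typedef struct {
    bool     hv;     /* MSR[HV]: hypervisor state */
    uint64_t hrmor;  /* hypervisor real mode offset register */
    uint64_t rmor;   /* guest real mode offset register */
    uint64_t rmls;   /* real mode limit in bytes, e.g. 16G */
} rm_env_t;

/* Returns the real address, or (uint64_t)-1 when a guest access falls
 * outside the RMA bounds (which would raise an ISI or DSI). */
static uint64_t real_mode_xlate(const rm_env_t *env, uint64_t eaddr)
{
    /* In real mode the top 4 effective address bits are ignored */
    uint64_t raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;

    if (env->hv) {
        if (!(eaddr >> 63)) {
            raddr |= env->hrmor;
        }
        return raddr;
    }
    if (raddr < env->rmls) {
        return raddr | env->rmor;
    }
    return (uint64_t)-1;
}
```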

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 hw/ppc/spapr.c              |   7 +++
 target-ppc/mmu-hash64.c     | 146 ++++++++++++++++++++++++++++++++++++++------
 target-ppc/mmu-hash64.h     |   1 +
 target-ppc/translate_init.c |  10 ++-
 4 files changed, 144 insertions(+), 20 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index d26b4c26ed10..53ab1f84fb11 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1770,6 +1770,13 @@ static void ppc_spapr_init(MachineState *machine)
             spapr->vrma_adjust = 1;
             spapr->rma_size = MIN(spapr->rma_size, 0x10000000);
         }
+
+        /* Actually we don't support unbounded RMA anymore since we
+         * added proper emulation of HV mode. The max we can get is
+         * 16G which also happens to be what we configure for PAPR
+         * mode so make sure we don't do anything bigger than that
+         */
+        spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);
     }
 
     if (spapr->rma_size > node0_size) {
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 6d6f26c92957..ed353b2d1539 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -653,13 +653,41 @@ static void ppc_hash64_set_dsi(CPUState *cs, CPUPPCState *env, uint64_t dar,
     env->error_code = 0;
 }
 
+static int64_t ppc_hash64_get_rmls(CPUPPCState *env)
+{
+    uint64_t lpcr = env->spr[SPR_LPCR];
+
+    /*
+     * This is the full 4 bits encoding of POWER8. Previous
+     * CPUs only support a subset of these but the filtering
+     * is done when writing LPCR
+     */
+    switch ((lpcr & LPCR_RMLS) >> LPCR_RMLS_SHIFT) {
+    case 0x8: /* 32MB */
+        return 0x2000000ull;
+    case 0x3: /* 64MB */
+        return 0x4000000ull;
+    case 0x7: /* 128MB */
+        return 0x8000000ull;
+    case 0x4: /* 256MB */
+        return 0x10000000ull;
+    case 0x2: /* 1GB */
+        return 0x40000000ull;
+    case 0x1: /* 16GB */
+        return 0x400000000ull;
+    default:
+        /* What to do here ??? */
+        return 0;
+    }
+}
 
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                                 int rwx, int mmu_idx)
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    ppc_slb_t *slb;
+    ppc_slb_t *slb_ptr;
+    ppc_slb_t slb;
     unsigned apshift;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
@@ -670,11 +698,53 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
 
     assert((rwx == 0) || (rwx == 1) || (rwx == 2));
 
+    /* Note on LPCR usage: 970 uses HID4, but our special variant
+     * of store_spr copies relevant fields into env->spr[SPR_LPCR].
+     * Similarly we filter unimplemented bits when storing into
+     * LPCR depending on the MMU version. This code can thus just
+     * use the LPCR "as-is".
+     */
+
     /* 1. Handle real mode accesses */
     if (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0))) {
-        /* Translation is off */
-        /* In real mode the top 4 effective address bits are ignored */
+        /* Translation is supposedly "off"  */
+        /* In real mode the top 4 effective address bits are (mostly) ignored */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
+
+        /* In HV mode, add HRMOR if top EA bit is clear */
+        if (msr_hv) {
+            if (!(eaddr >> 63)) {
+                raddr |= env->spr[SPR_HRMOR];
+            }
+        } else {
+            /* Otherwise, check VPM for RMA vs VRMA */
+            if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+                uint32_t vrmasd;
+                /* VRMA, we make up an SLB entry */
+                slb.vsid = SLB_VSID_VRMA;
+                vrmasd = (env->spr[SPR_LPCR] & LPCR_VRMASD) >>
+                    LPCR_VRMASD_SHIFT;
+                slb.vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
+                slb.esid = SLB_ESID_V;
+                goto skip_slb;
+            }
+            /* RMA. Check bounds in RMLS */
+            if (raddr < ppc_hash64_get_rmls(env)) {
+                raddr |= env->spr[SPR_RMOR];
+            } else {
+                /* The access failed, generate the appropriate interrupt */
+                if (rwx == 2) {
+                    ppc_hash64_set_isi(cs, env, 0x08000000);
+                } else {
+                    dsisr = 0x08000000;
+                    if (rwx == 1) {
+                        dsisr |= 0x02000000;
+                    }
+                    ppc_hash64_set_dsi(cs, env, eaddr, dsisr);
+                }
+                return 1;
+            }
+        }
         tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                      PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
                      TARGET_PAGE_SIZE);
@@ -682,9 +752,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
     }
 
     /* 2. Translation is on, so look up the SLB */
-    slb = slb_lookup(cpu, eaddr);
-
-    if (!slb) {
+    slb_ptr = slb_lookup(cpu, eaddr);
+    if (!slb_ptr) {
         if (rwx == 2) {
             cs->exception_index = POWERPC_EXCP_ISEG;
             env->error_code = 0;
@@ -696,14 +765,29 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
         return 1;
     }
 
+    /* We grab a local copy because we can modify it (or get a
+     * pre-cooked one from the VRMA code)
+     */
+    slb = *slb_ptr;
+
+    /* 2.5 Clamp L||LP in ISL mode */
+    if (env->spr[SPR_LPCR] & LPCR_ISL) {
+        slb.vsid &= ~SLB_VSID_LLP_MASK;
+    }
+
     /* 3. Check for segment level no-execute violation */
-    if ((rwx == 2) && (slb->vsid & SLB_VSID_N)) {
+    if ((rwx == 2) && (slb.vsid & SLB_VSID_N)) {
         ppc_hash64_set_isi(cs, env, 0x10000000);
         return 1;
     }
 
+    /* We go straight here for VRMA translations as none of the
+     * above applies in that case
+     */
+ skip_slb:
+
     /* 4. Locate the PTE in the hash table */
-    pte_offset = ppc_hash64_htab_lookup(cpu, slb, eaddr, &pte);
+    pte_offset = ppc_hash64_htab_lookup(cpu, &slb, eaddr, &pte);
     if (pte_offset == -1) {
         dsisr = 0x40000000;
         if (rwx == 2) {
@@ -720,7 +804,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                 "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
 
     /* Validate page size encoding */
-    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    apshift = hpte_page_shift(slb.sps, pte.pte0, pte.pte1);
     if (!apshift) {
         error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
                      " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
@@ -733,7 +817,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
 
     /* 5. Check access permissions */
 
-    pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
+    pp_prot = ppc_hash64_pte_prot(cpu, &slb, pte);
     amr_prot = ppc_hash64_amr_prot(cpu, pte);
     prot = pp_prot & amr_prot;
 
@@ -789,27 +873,51 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
 hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
 {
     CPUPPCState *env = &cpu->env;
-    ppc_slb_t *slb;
-    hwaddr pte_offset;
+    ppc_slb_t slb;
+    ppc_slb_t *slb_ptr;
+    hwaddr pte_offset, raddr;
     ppc_hash_pte64_t pte;
     unsigned apshift;
 
+    /* Handle real mode */
     if (msr_dr == 0) {
-        /* In real mode the top 4 effective address bits are ignored */
-        return addr & 0x0FFFFFFFFFFFFFFFULL;
-    }
+        raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
 
-    slb = slb_lookup(cpu, addr);
-    if (!slb) {
+        /* In HV mode, add HRMOR if top EA bit is clear */
+        if (msr_hv && !(addr >> 63)) {
+            return raddr | env->spr[SPR_HRMOR];
+        }
+
+        /* Otherwise, check VPM for RMA vs VRMA */
+        if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            uint32_t vrmasd;
+
+            /* VRMA, we make up an SLB entry */
+            slb.vsid = SLB_VSID_VRMA;
+            vrmasd = (env->spr[SPR_LPCR] & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
+            slb.vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
+            slb.esid = SLB_ESID_V;
+            goto skip_slb;
+        }
+        /* RMA. Check bounds in RMLS */
+        if (raddr < ppc_hash64_get_rmls(env)) {
+            return raddr | env->spr[SPR_RMOR];
+        }
         return -1;
     }
 
-    pte_offset = ppc_hash64_htab_lookup(cpu, slb, addr, &pte);
+    slb_ptr = slb_lookup(cpu, addr);
+    if (!slb_ptr) {
+        return -1;
+    }
+    slb = *slb_ptr;
+ skip_slb:
+    pte_offset = ppc_hash64_htab_lookup(cpu, &slb, addr, &pte);
     if (pte_offset == -1) {
         return -1;
     }
 
-    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    apshift = hpte_page_shift(slb.sps, pte.pte0, pte.pte1);
     if (!apshift) {
         return -1;
     }
diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
index 6423b9f791e7..13ad060cfefb 100644
--- a/target-ppc/mmu-hash64.h
+++ b/target-ppc/mmu-hash64.h
@@ -37,6 +37,7 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
 #define SLB_VSID_B_256M         0x0000000000000000ULL
 #define SLB_VSID_B_1T           0x4000000000000000ULL
 #define SLB_VSID_VSID           0x3FFFFFFFFFFFF000ULL
+#define SLB_VSID_VRMA           (0x0001FFFFFF000000ULL | SLB_VSID_B_1T)
 #define SLB_VSID_PTEM           (SLB_VSID_B | SLB_VSID_VSID)
 #define SLB_VSID_KS             0x0000000000000800ULL
 #define SLB_VSID_KP             0x0000000000000400ULL
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index 55d1bfac97c4..4820c0bc99fb 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -8791,11 +8791,19 @@ void cpu_ppc_set_papr(PowerPCCPU *cpu)
     /* Set emulated LPCR to not send interrupts to hypervisor. Note that
      * under KVM, the actual HW LPCR will be set differently by KVM itself,
      * the settings below ensure proper operations with TCG in absence of
-     * a real hypervisor
+     * a real hypervisor.
+     *
+     * Clearing VPM0 will also cause us to use RMOR in mmu-hash64.c for
+     * real mode accesses, which thankfully defaults to 0 and isn't
+     * accessible in guest mode.
      */
     lpcr->default_value &= ~(LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV);
     lpcr->default_value |= LPCR_LPES0 | LPCR_LPES1;
 
+    /* Set RMLS to the max (ie, 16G) */
+    lpcr->default_value &= ~LPCR_RMLS;
+    lpcr->default_value |= 1ull << LPCR_RMLS_SHIFT;
+
     /* P7 and P8 has slightly different PECE bits, mostly because P8 adds
      * bit 47 and 48 which are reserved on P7. Here we set them all, which
      * will work as expected for both implementations
-- 
2.1.4


* [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-28  6:48 [Qemu-devel] [PATCH 0/2] pnv: handle real mode addressing in HV mode Cédric Le Goater
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support Cédric Le Goater
@ 2016-06-28  6:48 ` Cédric Le Goater
  2016-06-29  2:22   ` David Gibson
  2016-06-30 10:56   ` Anton Blanchard
  1 sibling, 2 replies; 13+ messages in thread
From: Cédric Le Goater @ 2016-06-28  6:48 UTC (permalink / raw)
  To: David Gibson
  Cc: Benjamin Herrenschmidt, qemu-devel, qemu-ppc, Cedric Le Goater

From: Benjamin Herrenschmidt <benh@kernel.crashing.org>

We were always advertising only 4K & 16M. Additionally the code wasn't
properly matching the page size with the PTE content, which meant we
could potentially hit an incorrect PTE if the guest used multiple sizes.

Finally, honor the CPU capabilities when decoding the size from the SLB
so we don't try to use 64K pages on 970.

This still doesn't add support for MPSS (Multiple Page Sizes per Segment).
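The size-matching rule mentioned above (a PTE only matches when its low encoding bits agree with the page size selected by the SLB entry) can be shown standalone. The constants mirror the hunk below, but this is an illustrative copy, not the QEMU function:

```c
#include <stdint.h>

/* Illustrative copy of the page-size match described above: given the
 * page shift selected by the SLB entry and the low bits of PTE word 1,
 * return the effective page shift, or 0 if the PTE does not encode a
 * page of that size (so the PTEG search must skip this entry).
 */
static uint32_t pte_size_decode(uint64_t pte1, uint32_t slb_pshift)
{
    switch (slb_pshift) {
    case 12:                              /* 4K: nothing to check */
        return 12;
    case 16:                              /* 64K: pte_enc must be 1 */
        return ((pte1 & 0xf000) == 0x1000) ? 16 : 0;
    case 24:                              /* 16M: low bits must be clear */
        return ((pte1 & 0xff000) == 0) ? 24 : 0;
    }
    return 0;
}
```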

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: fixed checkpatch.pl errors
      commits 61a36c9b5a12 and 1114e712c998 reworked the hpte code
      doing insertion/removal in hw/ppc/spapr_hcall.c. The hunks
      modifying these areas were removed. ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target-ppc/cpu-qom.h        |  3 +++
 target-ppc/mmu-hash64.c     | 39 +++++++++++++++++++++++++++++++++++----
 target-ppc/translate_init.c | 22 +++++++++++++++++++---
 3 files changed, 57 insertions(+), 7 deletions(-)

diff --git a/target-ppc/cpu-qom.h b/target-ppc/cpu-qom.h
index 0fad2def0a94..286410502f6d 100644
--- a/target-ppc/cpu-qom.h
+++ b/target-ppc/cpu-qom.h
@@ -70,18 +70,21 @@ enum powerpc_mmu_t {
 #define POWERPC_MMU_64       0x00010000
 #define POWERPC_MMU_1TSEG    0x00020000
 #define POWERPC_MMU_AMR      0x00040000
+#define POWERPC_MMU_64K      0x00080000
     /* 64 bits PowerPC MMU                                     */
     POWERPC_MMU_64B        = POWERPC_MMU_64 | 0x00000001,
     /* Architecture 2.03 and later (has LPCR) */
     POWERPC_MMU_2_03       = POWERPC_MMU_64 | 0x00000002,
     /* Architecture 2.06 variant                               */
     POWERPC_MMU_2_06       = POWERPC_MMU_64 | POWERPC_MMU_1TSEG
+                             | POWERPC_MMU_64K
                              | POWERPC_MMU_AMR | 0x00000003,
     /* Architecture 2.06 "degraded" (no 1T segments)           */
     POWERPC_MMU_2_06a      = POWERPC_MMU_64 | POWERPC_MMU_AMR
                              | 0x00000003,
     /* Architecture 2.07 variant                               */
     POWERPC_MMU_2_07       = POWERPC_MMU_64 | POWERPC_MMU_1TSEG
+                             | POWERPC_MMU_64K
                              | POWERPC_MMU_AMR | 0x00000004,
     /* Architecture 2.07 "degraded" (no 1T segments)           */
     POWERPC_MMU_2_07a      = POWERPC_MMU_64 | POWERPC_MMU_AMR
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index ed353b2d1539..fa26ad2e875b 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -450,9 +450,31 @@ void ppc_hash64_stop_access(PowerPCCPU *cpu, uint64_t token)
     }
 }
 
+/* Returns the effective page shift or 0. MPSS isn't supported yet so
+ * this will always be the slb_pshift or 0
+ */
+static uint32_t ppc_hash64_pte_size_decode(uint64_t pte1, uint32_t slb_pshift)
+{
+    switch (slb_pshift) {
+    case 12:
+        return 12;
+    case 16:
+        if ((pte1 & 0xf000) == 0x1000) {
+            return 16;
+        }
+        return 0;
+    case 24:
+        if ((pte1 & 0xff000) == 0) {
+            return 24;
+        }
+        return 0;
+    }
+    return 0;
+}
+
 static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
-                                     bool secondary, target_ulong ptem,
-                                     ppc_hash_pte64_t *pte)
+                                     uint32_t slb_pshift, bool secondary,
+                                     target_ulong ptem, ppc_hash_pte64_t *pte)
 {
     CPUPPCState *env = &cpu->env;
     int i;
@@ -472,6 +494,13 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
         if ((pte0 & HPTE64_V_VALID)
             && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
             && HPTE64_V_COMPARE(pte0, ptem)) {
+            uint32_t pshift = ppc_hash64_pte_size_decode(pte1, slb_pshift);
+            if (pshift == 0) {
+                continue;
+            }
+            /* We don't do anything with pshift yet as qemu TLB only deals
+             * with 4K pages anyway
+             */
             pte->pte0 = pte0;
             pte->pte1 = pte1;
             ppc_hash64_stop_access(cpu, token);
@@ -525,7 +554,8 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
             " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx
             " hash=" TARGET_FMT_plx "\n",
             env->htab_base, env->htab_mask, vsid, ptem,  hash);
-    pte_offset = ppc_hash64_pteg_search(cpu, hash, 0, ptem, pte);
+    pte_offset = ppc_hash64_pteg_search(cpu, hash, slb->sps->page_shift,
+                                        0, ptem, pte);
 
     if (pte_offset == -1) {
         /* Secondary PTEG lookup */
@@ -535,7 +565,8 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
                 " hash=" TARGET_FMT_plx "\n", env->htab_base,
                 env->htab_mask, vsid, ptem, ~hash);
 
-        pte_offset = ppc_hash64_pteg_search(cpu, ~hash, 1, ptem, pte);
+        pte_offset = ppc_hash64_pteg_search(cpu, ~hash, slb->sps->page_shift, 1,
+                                            ptem, pte);
     }
 
     return pte_offset;
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index 4820c0bc99fb..d7860fd7f8ee 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -10301,8 +10301,8 @@ static void ppc_cpu_initfn(Object *obj)
     if (pcc->sps) {
         env->sps = *pcc->sps;
     } else if (env->mmu_model & POWERPC_MMU_64) {
-        /* Use default sets of page sizes */
-        static const struct ppc_segment_page_sizes defsps = {
+        /* Use default sets of page sizes. We don't support MPSS */
+        static const struct ppc_segment_page_sizes defsps_4k = {
             .sps = {
                 { .page_shift = 12, /* 4K */
                   .slb_enc = 0,
@@ -10314,7 +10314,23 @@ static void ppc_cpu_initfn(Object *obj)
                 },
             },
         };
-        env->sps = defsps;
+        static const struct ppc_segment_page_sizes defsps_64k = {
+            .sps = {
+                { .page_shift = 12, /* 4K */
+                  .slb_enc = 0,
+                  .enc = { { .page_shift = 12, .pte_enc = 0 } }
+                },
+                { .page_shift = 16, /* 64K */
+                  .slb_enc = 0x110,
+                  .enc = { { .page_shift = 16, .pte_enc = 1 } }
+                },
+                { .page_shift = 24, /* 16M */
+                  .slb_enc = 0x100,
+                  .enc = { { .page_shift = 24, .pte_enc = 0 } }
+                },
+            },
+        };
+        env->sps = (env->mmu_model & POWERPC_MMU_64K) ? defsps_64k : defsps_4k;
     }
 #endif /* defined(TARGET_PPC64) */
 
-- 
2.1.4


* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation Cédric Le Goater
@ 2016-06-29  2:22   ` David Gibson
  2016-06-30 10:56   ` Anton Blanchard
  1 sibling, 0 replies; 13+ messages in thread
From: David Gibson @ 2016-06-29  2:22 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Benjamin Herrenschmidt, qemu-devel, qemu-ppc


On Tue, Jun 28, 2016 at 08:48:34AM +0200, Cédric Le Goater wrote:
> From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> 
> We were always advertising only 4K & 16M. Additionally the code wasn't
> properly matching the page size with the PTE content, which meant we
> could potentially hit an incorrect PTE if the guest used multiple sizes.
> 
> Finally, honor the CPU capabilities when decoding the size from the SLB
> so we don't try to use 64K pages on 970.
> 
> This still doesn't add support for MPSS (Multiple Page Sizes per Segment)
> 
> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> [clg: fixed checkpatch.pl errors
>       commits 61a36c9b5a12 and 1114e712c998 reworked the hpte code
>       doing insertion/removal in hw/ppc/spapr_hcall.c. The hunks
>       modifying these areas were removed. ]
> Signed-off-by: Cédric Le Goater <clg@kaod.org>

Applied to ppc-for-2.7.


-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support Cédric Le Goater
@ 2016-06-29  2:41   ` David Gibson
  2016-06-29  2:59     ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 13+ messages in thread
From: David Gibson @ 2016-06-29  2:41 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Benjamin Herrenschmidt, qemu-devel, qemu-ppc


On Tue, Jun 28, 2016 at 08:48:33AM +0200, Cédric Le Goater wrote:
> From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> 
> This adds proper support for translating real mode addresses based
> on the combination of HV and LPCR bits. This handles HRMOR offset
> for hypervisor real mode, and both RMA and VRMA modes for guest
> real mode. PAPR mode adjusts the offsets appropriately to match the
> RMA used in TCG, but we need to limit it to the max supported by the
> implementation (16G).
> 
> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> [clg: fixed checkpatch.pl errors ]
> Signed-off-by: Cédric Le Goater <clg@kaod.org>

This looks correct and I've applied it.  There are a couple of
possible cleanups which might be a good idea to follow up with though.


> ---
>  hw/ppc/spapr.c              |   7 +++
>  target-ppc/mmu-hash64.c     | 146 ++++++++++++++++++++++++++++++++++++++------
>  target-ppc/mmu-hash64.h     |   1 +
>  target-ppc/translate_init.c |  10 ++-
>  4 files changed, 144 insertions(+), 20 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index d26b4c26ed10..53ab1f84fb11 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1770,6 +1770,13 @@ static void ppc_spapr_init(MachineState *machine)
>              spapr->vrma_adjust = 1;
>              spapr->rma_size = MIN(spapr->rma_size, 0x10000000);
>          }
> +
> +        /* Actually we don't support unbounded RMA anymore since we
> +         * added proper emulation of HV mode. The max we can get is
> +         * 16G which also happens to be what we configure for PAPR
> +         * mode so make sure we don't do anything bigger than that
> +         */
> +        spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);

#1 - Instead of the various KVM / non-KVM cases here, it might be
simpler to just always clamp the RMA to 256MiB.

>      }
>  
>      if (spapr->rma_size > node0_size) {
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 6d6f26c92957..ed353b2d1539 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -653,13 +653,41 @@ static void ppc_hash64_set_dsi(CPUState *cs, CPUPPCState *env, uint64_t dar,
>      env->error_code = 0;
>  }
>  
> +static int64_t ppc_hash64_get_rmls(CPUPPCState *env)
> +{
> +    uint64_t lpcr = env->spr[SPR_LPCR];
> +
> +    /*
> +     * This is the full 4 bits encoding of POWER8. Previous
> +     * CPUs only support a subset of these but the filtering
> +     * is done when writing LPCR
> +     */
> +    switch ((lpcr & LPCR_RMLS) >> LPCR_RMLS_SHIFT) {
> +    case 0x8: /* 32MB */
> +        return 0x2000000ull;
> +    case 0x3: /* 64MB */
> +        return 0x4000000ull;
> +    case 0x7: /* 128MB */
> +        return 0x8000000ull;
> +    case 0x4: /* 256MB */
> +        return 0x10000000ull;
> +    case 0x2: /* 1GB */
> +        return 0x40000000ull;
> +    case 0x1: /* 16GB */
> +        return 0x400000000ull;
> +    default:
> +        /* What to do here ??? */
> +        return 0;
> +    }
> +}
>  
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                                  int rwx, int mmu_idx)
>  {
>      CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> -    ppc_slb_t *slb;
> +    ppc_slb_t *slb_ptr;
> +    ppc_slb_t slb;
>      unsigned apshift;
>      hwaddr pte_offset;
>      ppc_hash_pte64_t pte;
> @@ -670,11 +698,53 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>  
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>  
> +    /* Note on LPCR usage: 970 uses HID4, but our special variant
> +     * of store_spr copies relevant fields into env->spr[SPR_LPCR].
> +     * Similarly we filter unimplemented bits when storing into
> +     * LPCR depending on the MMU version. This code can thus just
> +     * use the LPCR "as-is".
> +     */
> +
>      /* 1. Handle real mode accesses */
>      if (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0))) {
> -        /* Translation is off */
> -        /* In real mode the top 4 effective address bits are ignored */
> +        /* Translation is supposedly "off"  */
> +        /* In real mode the top 4 effective address bits are (mostly) ignored */
>          raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
> +
> +        /* In HV mode, add HRMOR if top EA bit is clear */
> +        if (msr_hv) {
> +            if (!(eaddr >> 63)) {
> +                raddr |= env->spr[SPR_HRMOR];
> +            }
> +        } else {
> +            /* Otherwise, check VPM for RMA vs VRMA */
> +            if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> +                uint32_t vrmasd;
> +                /* VRMA, we make up an SLB entry */
> +                slb.vsid = SLB_VSID_VRMA;
> +                vrmasd = (env->spr[SPR_LPCR] & LPCR_VRMASD) >>
> +                    LPCR_VRMASD_SHIFT;
> +                slb.vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> +                slb.esid = SLB_ESID_V;
> +                goto skip_slb;
> +            }
> +            /* RMA. Check bounds in RMLS */
> +            if (raddr < ppc_hash64_get_rmls(env)) {
> +                raddr |= env->spr[SPR_RMOR];
> +            } else {
> +                /* The access failed, generate the appropriate interrupt */
> +                if (rwx == 2) {
> +                    ppc_hash64_set_isi(cs, env, 0x08000000);
> +                } else {
> +                    dsisr = 0x08000000;
> +                    if (rwx == 1) {
> +                        dsisr |= 0x02000000;
> +                    }
> +                    ppc_hash64_set_dsi(cs, env, eaddr, dsisr);
> +                }
> +                return 1;
> +            }
> +        }
>          tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
>                       PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
>                       TARGET_PAGE_SIZE);
> @@ -682,9 +752,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>      }
>  
>      /* 2. Translation is on, so look up the SLB */
> -    slb = slb_lookup(cpu, eaddr);
> -
> -    if (!slb) {
> +    slb_ptr = slb_lookup(cpu, eaddr);
> +    if (!slb_ptr) {
>          if (rwx == 2) {
>              cs->exception_index = POWERPC_EXCP_ISEG;
>              env->error_code = 0;
> @@ -696,14 +765,29 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>          return 1;
>      }
>  
> +    /* We grab a local copy because we can modify it (or get a
> +     * pre-cooked one from the VRMA code)
> +     */
> +    slb = *slb_ptr;
> +
> +    /* 2.5 Clamp L||LP in ISL mode */
> +    if (env->spr[SPR_LPCR] & LPCR_ISL) {
> +        slb.vsid &= ~SLB_VSID_LLP_MASK;
> +    }
> +
>      /* 3. Check for segment level no-execute violation */
> -    if ((rwx == 2) && (slb->vsid & SLB_VSID_N)) {
> +    if ((rwx == 2) && (slb.vsid & SLB_VSID_N)) {
>          ppc_hash64_set_isi(cs, env, 0x10000000);
>          return 1;
>      }
>  
> +    /* We go straight here for VRMA translations as none of the
> +     * above applies in that case
> +     */
> + skip_slb:
> +
>      /* 4. Locate the PTE in the hash table */
> -    pte_offset = ppc_hash64_htab_lookup(cpu, slb, eaddr, &pte);
> +    pte_offset = ppc_hash64_htab_lookup(cpu, &slb, eaddr, &pte);
>      if (pte_offset == -1) {
>          dsisr = 0x40000000;
>          if (rwx == 2) {
> @@ -720,7 +804,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                  "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
>  
>      /* Validate page size encoding */
> -    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    apshift = hpte_page_shift(slb.sps, pte.pte0, pte.pte1);
>      if (!apshift) {
>          error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
>                       " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
> @@ -733,7 +817,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>  
>      /* 5. Check access permissions */
>  
> -    pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
> +    pp_prot = ppc_hash64_pte_prot(cpu, &slb, pte);
>      amr_prot = ppc_hash64_amr_prot(cpu, pte);
>      prot = pp_prot & amr_prot;
>  
> @@ -789,27 +873,51 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>  hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>  {
>      CPUPPCState *env = &cpu->env;
> -    ppc_slb_t *slb;
> -    hwaddr pte_offset;
> +    ppc_slb_t slb;
> +    ppc_slb_t *slb_ptr;
> +    hwaddr pte_offset, raddr;
>      ppc_hash_pte64_t pte;
>      unsigned apshift;
>  
> +    /* Handle real mode */
>      if (msr_dr == 0) {
> -        /* In real mode the top 4 effective address bits are ignored */
> -        return addr & 0x0FFFFFFFFFFFFFFFULL;
> -    }
> +        raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
>  
> -    slb = slb_lookup(cpu, addr);
> -    if (!slb) {
> +        /* In HV mode, add HRMOR if top EA bit is clear */
> +        if (msr_hv && !(addr >> 63)) {
> +            return raddr | env->spr[SPR_HRMOR];
> +        }
> +
> +        /* Otherwise, check VPM for RMA vs VRMA */
> +        if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> +            uint32_t vrmasd;
> +
> +            /* VRMA, we make up an SLB entry */
> +            slb.vsid = SLB_VSID_VRMA;
> +            vrmasd = (env->spr[SPR_LPCR] & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> +            slb.vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> +            slb.esid = SLB_ESID_V;
> +            goto skip_slb;
> +        }
> +        /* RMA. Check bounds in RMLS */
> +        if (raddr < ppc_hash64_get_rmls(env)) {
> +            return raddr | env->spr[SPR_RMOR];
> +        }

Now that the real-mode case is non-trivial, it would be nice if we
could factor out some of this logic from the fault and page_debug
cases into a common helper function.

>          return -1;
>      }
>  
> -    pte_offset = ppc_hash64_htab_lookup(cpu, slb, addr, &pte);
> +    slb_ptr = slb_lookup(cpu, addr);
> +    if (!slb_ptr) {
> +        return -1;
> +    }
> +    slb = *slb_ptr;
> + skip_slb:
> +    pte_offset = ppc_hash64_htab_lookup(cpu, &slb, addr, &pte);
>      if (pte_offset == -1) {
>          return -1;
>      }
>  
> -    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    apshift = hpte_page_shift(slb.sps, pte.pte0, pte.pte1);
>      if (!apshift) {
>          return -1;
>      }
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 6423b9f791e7..13ad060cfefb 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -37,6 +37,7 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
>  #define SLB_VSID_B_256M         0x0000000000000000ULL
>  #define SLB_VSID_B_1T           0x4000000000000000ULL
>  #define SLB_VSID_VSID           0x3FFFFFFFFFFFF000ULL
> +#define SLB_VSID_VRMA           (0x0001FFFFFF000000ULL | SLB_VSID_B_1T)
>  #define SLB_VSID_PTEM           (SLB_VSID_B | SLB_VSID_VSID)
>  #define SLB_VSID_KS             0x0000000000000800ULL
>  #define SLB_VSID_KP             0x0000000000000400ULL
> diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
> index 55d1bfac97c4..4820c0bc99fb 100644
> --- a/target-ppc/translate_init.c
> +++ b/target-ppc/translate_init.c
> @@ -8791,11 +8791,19 @@ void cpu_ppc_set_papr(PowerPCCPU *cpu)
>      /* Set emulated LPCR to not send interrupts to hypervisor. Note that
>       * under KVM, the actual HW LPCR will be set differently by KVM itself,
>       * the settings below ensure proper operations with TCG in absence of
> -     * a real hypervisor
> +     * a real hypervisor.
> +     *
> +     * Clearing VPM0 will also cause us to use RMOR in mmu-hash64.c for
> +     * real mode accesses, which thankfully defaults to 0 and isn't
> +     * accessible in guest mode.
>       */
>      lpcr->default_value &= ~(LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV);
>      lpcr->default_value |= LPCR_LPES0 | LPCR_LPES1;
>  
> +    /* Set RMLS to the max (ie, 16G) */
> +    lpcr->default_value &= ~LPCR_RMLS;
> +    lpcr->default_value |= 1ull << LPCR_RMLS_SHIFT;
> +
>      /* P7 and P8 has slightly different PECE bits, mostly because P8 adds
>       * bit 47 and 48 which are reserved on P7. Here we set them all, which
>       * will work as expected for both implementations

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support
  2016-06-29  2:41   ` David Gibson
@ 2016-06-29  2:59     ` Benjamin Herrenschmidt
  2016-06-29  3:05       ` David Gibson
  0 siblings, 1 reply; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2016-06-29  2:59 UTC (permalink / raw)
  To: David Gibson, Cédric Le Goater; +Cc: qemu-devel, qemu-ppc

On Wed, 2016-06-29 at 12:41 +1000, David Gibson wrote:
> > +        /* Actually we don't support unbounded RMA anymore since
> we
> > +         * added proper emulation of HV mode. The max we can get
> is
> > +         * 16G which also happens to be what we configure for PAPR
> > +         * mode so make sure we don't do anything bigger than that
> > +         */
> > +        spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);
> 
> #1 - Instead of the various KVM / non-KVM cases here, it might be
> simpler to just always clamp the RMA to 256MiB.

That would be sad ... we benefit from having a larger RMA..

Cheers,
Ben.



* Re: [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support
  2016-06-29  2:59     ` Benjamin Herrenschmidt
@ 2016-06-29  3:05       ` David Gibson
  0 siblings, 0 replies; 13+ messages in thread
From: David Gibson @ 2016-06-29  3:05 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: Cédric Le Goater, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 968 bytes --]

On Wed, Jun 29, 2016 at 12:59:05PM +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2016-06-29 at 12:41 +1000, David Gibson wrote:
> > > +        /* Actually we don't support unbounded RMA anymore since
> > we
> > > +         * added proper emulation of HV mode. The max we can get
> > is
> > > +         * 16G which also happens to be what we configure for PAPR
> > > +         * mode so make sure we don't do anything bigger than that
> > > +         */
> > > +        spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);
> > 
> > #1 - Instead of the various KVM / non-KVM cases here, it might be
> > simpler to just always clamp the RMA to 256MiB.
> 
> That would be sad ... we benefit from having a larger RMA..

Ah, ok.  Let's leave it as is, then.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-28  6:48 ` [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation Cédric Le Goater
  2016-06-29  2:22   ` David Gibson
@ 2016-06-30 10:56   ` Anton Blanchard
  2016-06-30 11:08     ` Benjamin Herrenschmidt
  2016-06-30 16:01     ` Cédric Le Goater
  1 sibling, 2 replies; 13+ messages in thread
From: Anton Blanchard @ 2016-06-30 10:56 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: David Gibson, qemu-ppc, qemu-devel, Benjamin Herrenschmidt

Hi,

> From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> 
> We were always advertising only 4K & 16M. Additionally the code wasn't
> properly matching the page size with the PTE content, which meant we
> could potentially hit an incorrect PTE if the guest used multiple
> sizes.
> 
> Finally, honor the CPU capabilities when decoding the size from the
> SLB so we don't try to use 64K pages on 970.
> 
> This still doesn't add support for MPSS (Multiple Page Sizes per
> Segment)

This is causing issues booting an Ubuntu yakkety cloud image. I'm
running on a ppc64le box (I don't think it reproduces on x86-64).

cat << EOF > my-user-data
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True
EOF

cloud-localds my-seed.img my-user-data

wget -N https://cloud-images.ubuntu.com/yakkety/current/yakkety-server-cloudimg-ppc64el.img

qemu-system-ppc64 -M pseries -cpu POWER8 -nographic -vga none -m 4G -drive file=test.img -drive file=my-seed.img -net user -net nic

The cloud-init scripts never finish, so the ubuntu user's
password is never updated. With the above cloud config you
should be able to log in with ubuntu/password.

Anton


* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-30 10:56   ` Anton Blanchard
@ 2016-06-30 11:08     ` Benjamin Herrenschmidt
  2016-06-30 16:01     ` Cédric Le Goater
  1 sibling, 0 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2016-06-30 11:08 UTC (permalink / raw)
  To: Anton Blanchard, Cédric Le Goater; +Cc: David Gibson, qemu-ppc, qemu-devel

On Thu, 2016-06-30 at 20:56 +1000, Anton Blanchard wrote:
> Hi,
> 
> > From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > 
> > We were always advertising only 4K & 16M. Additionally the code wasn't
> > properly matching the page size with the PTE content, which meant we
> > could potentially hit an incorrect PTE if the guest used multiple
> > sizes.
> > 
> > Finally, honor the CPU capabilities when decoding the size from the
> > SLB so we don't try to use 64K pages on 970.
> > 
> > This still doesn't add support for MPSS (Multiple Page Sizes per
> > Segment)
> 
> This is causing issues booting an Ubuntu yakkety cloud image. I'm
> running on a ppc64le box (I don't think it reproduces on x86-64).

I don't completely understand your repro instructions ... I'm surprised
there would be a difference here between ppc64le and x86_64 hosts... they
are both 64-bit LE hosts and the MMU stuff is host code, not JITed (well
there is JITed code for the qemu TLB lookups but that's always 4k).

Very strange ... I need to reproduce and see what the heck it is doing.

Cheers,
Ben.

> #cloud-config
> password: password
> chpasswd: { expire: False }
> ssh_pwauth: True
> EOF
> 
> cloud-localds my-seed.img my-user-data
> 
> wget -N https://cloud-images.ubuntu.com/yakkety/current/yakkety-server-cloudimg-ppc64el.img
> 
> qemu-system-ppc64 -M pseries -cpu POWER8 -nographic -vga none -m 4G -drive file=test.img -drive file=my-seed.img -net user -net nic
> 
> The cloud-init scripts never finish, so the ubuntu user's
> password is never updated. With the above cloud config you
> should be able to log in with ubuntu/password.
> 
> Anton


* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-30 10:56   ` Anton Blanchard
  2016-06-30 11:08     ` Benjamin Herrenschmidt
@ 2016-06-30 16:01     ` Cédric Le Goater
  2016-06-30 22:13       ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 13+ messages in thread
From: Cédric Le Goater @ 2016-06-30 16:01 UTC (permalink / raw)
  To: Anton Blanchard
  Cc: David Gibson, qemu-ppc, qemu-devel, Benjamin Herrenschmidt

On 06/30/2016 12:56 PM, Anton Blanchard wrote:
> Hi,
> 
>> From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>>
>> We were always advertising only 4K & 16M. Additionally the code wasn't
>> properly matching the page size with the PTE content, which meant we
>> could potentially hit an incorrect PTE if the guest used multiple
>> sizes.
>>
>> Finally, honor the CPU capabilities when decoding the size from the
>> SLB so we don't try to use 64K pages on 970.
>>
>> This still doesn't add support for MPSS (Multiple Page Sizes per
>> Segment)
> 
> This is causing issues booting an Ubuntu yakkety cloud image. I'm
> running on a ppc64le box (I don't think it reproduces on x86-64).
> 
> cat << EOF > my-user-data
> #cloud-config
> password: password
> chpasswd: { expire: False }
> ssh_pwauth: True
> EOF
> 
> cloud-localds my-seed.img my-user-data
> 
> wget -N https://cloud-images.ubuntu.com/yakkety/current/yakkety-server-cloudimg-ppc64el.img
> 
> qemu-system-ppc64 -M pseries -cpu POWER8 -nographic -vga none -m 4G -drive file=test.img -drive file=my-seed.img -net user -net nic
> 
> The cloud-init scripts never finish, so the ubuntu user's
> password is never updated. With the above cloud config you
> should be able to log in with ubuntu/password.

The code I pushed was a little old. See below for a possible
fix (Ben, David, could you check?)

With it, I could log in the image : 

	Ubuntu Yakkety Yak (development branch) ubuntu hvc0

	ubuntu login: [  164.177145] cloud-init[1890]: Generating locales (this might take a while)...
		[  175.653454] cloud-init[1890]:   en_US.UTF-8... done
	[  175.664475] cloud-init[1890]: Generation complete.
	[  184.064419] cloud-init[1890]: Cloud-init v. 0.7.7 running 'modules:config' at Thu, 30 Jun 2016 15:47:22 +0000. Up 158.38 seconds.
	ci-info: no authorized ssh keys fingerprints found for user ubuntu.
	<14>Jun 30 15:47:58 ec2: 
	<14>Jun 30 15:47:58 ec2: #############################################################
	<14>Jun 30 15:47:58 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
	<14>Jun 30 15:47:58 ec2: 1024 SHA256:N8fPX8q8HY2aaIn2r9gf2X+6We3GNcDF+Nb/OO9JIks root@ubuntu (DSA)
	<14>Jun 30 15:47:59 ec2: 256 SHA256:3+9VDWNJ0w4L1RS5jTE3sfYBhMZ/nxS1qJwFZNismbQ root@ubuntu (ECDSA)
	<14>Jun 30 15:47:59 ec2: 256 SHA256:3YDyIYY3M5ThxmeEjn3ZW4GGq0xTony2W0u2pl43pDc root@ubuntu (ED25519)
	<14>Jun 30 15:48:00 ec2: 2048 SHA256:lwcJNspduOE7QOR48X6TudTJ5mj+8i3FhAAD7UzJAG0 root@ubuntu (RSA)
	<14>Jun 30 15:48:00 ec2: -----END SSH HOST KEY FINGERPRINTS-----
	<14>Jun 30 15:48:00 ec2: #############################################################
	-----BEGIN SSH HOST KEY KEYS-----
	ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxBntCKa5Po0v0eu5Vq6WWCfmTpK7enqvfo7UKZJZz5iXsSBu40yzqUJQVQsqJ4l9toLkaJlYCMlRWDbQ3X76Y= root@ubuntu
	ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO9Y6u+kQCKnX0jb0TyUpkmPOkjGFg8b7EbiHnJPfGBg root@ubuntu
	ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCr75VVowRslg0iDyf3rfJ2crE7H4tzstbmwDzXlJsVVzg4Xysfck6LKlNT8tvQwlocaHaiUYF3pbPCrpYypG2NzlQA9HT+KIdVb5NTeeLy7GdumK3DpWYBSGFdpiGdTCDfeZ9ny5YycAEopsdFJcDazhD/lZCGcYtGYjv+BUIjAtTO0GPHndmnJLZgDMxizTwaxAUb94qj/qyF2yO7fZq9QxgHVzwlwhfD3Wpl9PNH3ZjURMUAH1ExjX4jvkikrvVCW2Q38R6Pal3sgXexq/QdlwhzqCXOeuedo8BHEMmta2QiMoovAtuYLL41k7e2RBY4x8Pq4n4bnY6kPLp1neE3 root@ubuntu
	-----END SSH HOST KEY KEYS-----
	[  196.524382] cloud-init[2058]: Cloud-init v. 0.7.7 running 'modules:final' at Thu, 30 Jun 2016 15:47:55 +0000. Up 191.77 seconds.
	[  196.532336] cloud-init[2058]: ci-info: no authorized ssh keys fingerprints found for user ubuntu.
	[  196.540332] cloud-init[2058]: Cloud-init v. 0.7.7 finished at Thu, 30 Jun 2016 15:48:00 +0000. Datasource DataSourceNoCloud [seed=/dev/sdb][dsmode=net].  Up 196.26 seconds

	Ubuntu Yakkety Yak (development branch) ubuntu hvc0

	ubuntu login: ubuntu
	Password: 
	Welcome to Ubuntu Yakkety Yak (development branch) (GNU/Linux 4.4.0-28-generic ppc64le)
	...

Could you give it a try please? If you hit this compile bug on Ubuntu:

target-ppc/mmu-hash64.c:936:16: error: '*((void *)&slb+16)' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     pte_offset = ppc_hash64_htab_lookup(cpu, &slb, addr, &pte);

That needs another fix. working on it. 

Thanks,

C. 


From: Cédric Le Goater <clg@kaod.org>
Subject: [PATCH] ppc: fix regression in large page support
Date: Thu, 30 Jun 2016 17:01:17 +0200
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

introduced by commit 53df75a59bcf ('ppc: Fix 64K pages support in full
emulation')

Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target-ppc/mmu-hash64.c |   24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

Index: qemu-dgibson-for-2.7.git/target-ppc/mmu-hash64.c
===================================================================
--- qemu-dgibson-for-2.7.git.orig/target-ppc/mmu-hash64.c
+++ qemu-dgibson-for-2.7.git/target-ppc/mmu-hash64.c
@@ -453,23 +453,12 @@ void ppc_hash64_stop_access(PowerPCCPU *
 /* Returns the effective page shift or 0. MPSS isn't supported yet so
  * this will always be the slb_pshift or 0
  */
-static uint32_t ppc_hash64_pte_size_decode(uint64_t pte1, uint32_t slb_pshift)
+static uint32_t ppc_hash64_pte_size_decode(PowerPCCPU *cpu, uint64_t pte0,
+                                           uint64_t pte1, uint32_t slb_pshift)
 {
-    switch (slb_pshift) {
-    case 12:
-        return 12;
-    case 16:
-        if ((pte1 & 0xf000) == 0x1000) {
-            return 16;
-        }
-        return 0;
-    case 24:
-        if ((pte1 & 0xff000) == 0) {
-            return 24;
-        }
-        return 0;
-    }
-    return 0;
+    unsigned spshift;
+
+    return ppc_hash64_hpte_page_shift_noslb(cpu, pte0, pte1, &spshift);
 }
 
 static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
@@ -494,7 +483,8 @@ static hwaddr ppc_hash64_pteg_search(Pow
         if ((pte0 & HPTE64_V_VALID)
             && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
             && HPTE64_V_COMPARE(pte0, ptem)) {
-            uint32_t pshift = ppc_hash64_pte_size_decode(pte1, slb_pshift);
+            uint32_t pshift = ppc_hash64_pte_size_decode(cpu, pte0, pte1,
+                                                         slb_pshift);
             if (pshift == 0) {
                 continue;
             }


* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-30 16:01     ` Cédric Le Goater
@ 2016-06-30 22:13       ` Benjamin Herrenschmidt
  2016-06-30 23:56         ` David Gibson
  2016-07-01  6:06         ` Cédric Le Goater
  0 siblings, 2 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2016-06-30 22:13 UTC (permalink / raw)
  To: Cédric Le Goater, Anton Blanchard; +Cc: David Gibson, qemu-ppc, qemu-devel

On Thu, 2016-06-30 at 18:01 +0200, Cédric Le Goater wrote:
> +static uint32_t ppc_hash64_pte_size_decode(PowerPCCPU *cpu, uint64_t
> pte0,
> +                                           uint64_t pte1, uint32_t
> slb_pshift)
>  {
> -    switch (slb_pshift) {
> -    case 12:
> -        return 12;
> -    case 16:
> -        if ((pte1 & 0xf000) == 0x1000) {
> -            return 16;
> -        }
> -        return 0;
> -    case 24:
> -        if ((pte1 & 0xff000) == 0) {
> -            return 24;
> -        }
> -        return 0;
> -    }
> -    return 0;
> +    unsigned spshift;
> +
> +    return ppc_hash64_hpte_page_shift_noslb(cpu, pte0, pte1,
> &spshift);
>  }

Why not call ppc_hash64_hpte_page_shift_noslb() directly from the call
site? That or rename it to ppc_hash64_pte_size_decode :-)

Otherwise yes, your patch looks correct as in what
doesppc_hash64_hpte_page_shift_noslb() is definitely more correct than
what ppc_hash64_pte_size_decode() is doing.

Cheers,
Ben.



* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-30 22:13       ` Benjamin Herrenschmidt
@ 2016-06-30 23:56         ` David Gibson
  2016-07-01  6:06         ` Cédric Le Goater
  1 sibling, 0 replies; 13+ messages in thread
From: David Gibson @ 2016-06-30 23:56 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Cédric Le Goater, Anton Blanchard, qemu-ppc, qemu-devel


On Fri, Jul 01, 2016 at 08:13:47AM +1000, Benjamin Herrenschmidt wrote:
> On Thu, 2016-06-30 at 18:01 +0200, Cédric Le Goater wrote:
> > +static uint32_t ppc_hash64_pte_size_decode(PowerPCCPU *cpu, uint64_t
> > pte0,
> > +                                           uint64_t pte1, uint32_t
> > slb_pshift)
> >  {
> > -    switch (slb_pshift) {
> > -    case 12:
> > -        return 12;
> > -    case 16:
> > -        if ((pte1 & 0xf000) == 0x1000) {
> > -            return 16;
> > -        }
> > -        return 0;
> > -    case 24:
> > -        if ((pte1 & 0xff000) == 0) {
> > -            return 24;
> > -        }
> > -        return 0;
> > -    }
> > -    return 0;
> > +    unsigned spshift;
> > +
> > +    return ppc_hash64_hpte_page_shift_noslb(cpu, pte0, pte1,
> > &spshift);
> >  }
> 
> Why not call ppc_hash64_hpte_page_shift_noslb() directly from the call
> site ? That or rename it to ppc_hash64_pte_size_decode :-)

Right.. that is the usage that ppc_hash64_hpte_page_shift_noslb() was
intended for.

> Otherwise yes, your patch looks correct, as what
> ppc_hash64_hpte_page_shift_noslb() does is definitely more correct than
> what ppc_hash64_pte_size_decode() is doing.
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation
  2016-06-30 22:13       ` Benjamin Herrenschmidt
  2016-06-30 23:56         ` David Gibson
@ 2016-07-01  6:06         ` Cédric Le Goater
  1 sibling, 0 replies; 13+ messages in thread
From: Cédric Le Goater @ 2016-07-01  6:06 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Anton Blanchard
  Cc: David Gibson, qemu-ppc, qemu-devel

On 07/01/2016 12:13 AM, Benjamin Herrenschmidt wrote:
> On Thu, 2016-06-30 at 18:01 +0200, Cédric Le Goater wrote:
>> +static uint32_t ppc_hash64_pte_size_decode(PowerPCCPU *cpu, uint64_t
>> pte0,
>> +                                           uint64_t pte1, uint32_t
>> slb_pshift)
>>  {
>> -    switch (slb_pshift) {
>> -    case 12:
>> -        return 12;
>> -    case 16:
>> -        if ((pte1 & 0xf000) == 0x1000) {
>> -            return 16;
>> -        }
>> -        return 0;
>> -    case 24:
>> -        if ((pte1 & 0xff000) == 0) {
>> -            return 24;
>> -        }
>> -        return 0;
>> -    }
>> -    return 0;
>> +    unsigned spshift;
>> +
>> +    return ppc_hash64_hpte_page_shift_noslb(cpu, pte0, pte1,
>> &spshift);
>>  }
> 
> Why not call ppc_hash64_hpte_page_shift_noslb() directly from the call
> site ? That or rename it to ppc_hash64_pte_size_decode :-)

yes, clearly :) but that segment page shift is bothering me.  

David, 

Do you think I can remove that parameter as it is never used or do you
have some plans for it?

> Otherwise yes, your patch looks correct, as what
> ppc_hash64_hpte_page_shift_noslb() does is definitely more correct than
> what ppc_hash64_pte_size_decode() is doing.

Thanks,

C.
 


end of thread, other threads:[~2016-07-01  6:06 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-28  6:48 [Qemu-devel] [PATCH 0/2] pnv: handle real mode addressing in HV mode Cédric Le Goater
2016-06-28  6:48 ` [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support Cédric Le Goater
2016-06-29  2:41   ` David Gibson
2016-06-29  2:59     ` Benjamin Herrenschmidt
2016-06-29  3:05       ` David Gibson
2016-06-28  6:48 ` [Qemu-devel] [PATCH 2/2] ppc: Fix 64K pages support in full emulation Cédric Le Goater
2016-06-29  2:22   ` David Gibson
2016-06-30 10:56   ` Anton Blanchard
2016-06-30 11:08     ` Benjamin Herrenschmidt
2016-06-30 16:01     ` Cédric Le Goater
2016-06-30 22:13       ` Benjamin Herrenschmidt
2016-06-30 23:56         ` David Gibson
2016-07-01  6:06         ` Cédric Le Goater
