* [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling
@ 2020-03-03  3:43 David Gibson
  2020-03-03  3:43 ` [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode David Gibson
                   ` (16 more replies)
  0 siblings, 17 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

POWER "book S" (server class) cpus have a concept of "real mode" where
MMU translation is disabled... sort of.  In fact this can mean a bunch
of slightly different things when hypervisor mode and other
considerations are present.

We had some errors in edge cases here, so clean some things up and
correct them.

Some of those limitations caused problems with calculating the size of
the Real Mode Area of pseries guests, so continue on to clean up and
correct those calculations as well.

Changes since v6:
 * Removed a no-longer-meaningful comment
 * Reworked RMLS handling for brevity and clarity
 * Fixed some earlier rebase damage that mixed hunks between some patches
 * Fixed a compile error mid-series
 * Fixed a bug caused by patch re-ordering
 * Fixed some bugs in RMA calculation due to muddling local and global
   copies of the rma size
Changes since v5:
 * Fixed an error in the sense of a test (pointed out by Fabiano Rosas)
Changes since v4:
 * Some tiny cosmetic fixes to the original patches
 * Added a bunch of extra patches correcting RMA calculation
Changes since v3:
 * Fix style errors reported by checkpatch
Changes since v2:
 * Removed 32-bit hypervisor stubs more completely
 * Minor polish based on review comments
Changes since RFCv1:
 * Add a number of extra patches taking advantage of the initial
   cleanups

David Gibson (17):
  ppc: Remove stub support for 32-bit hypervisor mode
  ppc: Remove stub of PPC970 HID4 implementation
  target/ppc: Correct handling of real mode accesses with vhyp on hash
    MMU
  target/ppc: Introduce ppc_hash64_use_vrma() helper
  spapr, ppc: Remove VPM0/RMLS hacks for POWER9
  target/ppc: Remove RMOR register from POWER9 & POWER10
  target/ppc: Use class fields to simplify LPCR masking
  target/ppc: Streamline calculation of RMA limit from LPCR[RMLS]
  target/ppc: Correct RMLS table
  target/ppc: Only calculate RMLS derived RMA limit on demand
  target/ppc: Don't store VRMA SLBE persistently
  spapr: Don't use weird units for MIN_RMA_SLOF
  spapr,ppc: Simplify signature of kvmppc_rma_size()
  spapr: Don't attempt to clamp RMA to VRMA constraint
  spapr: Don't clamp RMA to 16GiB on new machine types
  spapr: Clean up RMA size calculation
  spapr: Fold spapr_node0_size() into its only caller

 hw/ppc/spapr.c                  | 125 +++++++------
 hw/ppc/spapr_cpu_core.c         |  10 +-
 hw/ppc/spapr_hcall.c            |   4 +-
 include/hw/ppc/spapr.h          |   4 +-
 target/ppc/cpu-qom.h            |   1 +
 target/ppc/cpu.h                |  25 +--
 target/ppc/kvm.c                |   5 +-
 target/ppc/kvm_ppc.h            |   7 +-
 target/ppc/mmu-hash64.c         | 319 ++++++++++++--------------------
 target/ppc/translate_init.inc.c |  63 ++++---
 10 files changed, 246 insertions(+), 317 deletions(-)

-- 
2.24.1




* [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-05  8:58   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 02/17] ppc: Remove stub of PPC970 HID4 implementation David Gibson
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

Commit a4f30719a8cd, way back in 2007, noted that "PowerPC hypervisor mode
is not fundamentally available only for PowerPC 64" and added a 32-bit
version of the MSR[HV] bit.

But nothing was ever really done with that; there is no meaningful support
for 32-bit hypervisor mode 13 years later.  Let's stop pretending and just
remove the stubs.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/cpu.h                | 21 +++++++--------------
 target/ppc/translate_init.inc.c |  6 +++---
 2 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index b283042515..8077fdb068 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -24,8 +24,6 @@
 #include "exec/cpu-defs.h"
 #include "cpu-qom.h"
 
-/* #define PPC_EMULATE_32BITS_HYPV */
-
 #define TCG_GUEST_DEFAULT_MO 0
 
 #define TARGET_PAGE_BITS_64K 16
@@ -300,13 +298,12 @@ typedef struct ppc_v3_pate_t {
 #define MSR_SF   63 /* Sixty-four-bit mode                            hflags */
 #define MSR_TAG  62 /* Tag-active mode (POWERx ?)                            */
 #define MSR_ISF  61 /* Sixty-four-bit interrupt mode on 630                  */
-#define MSR_SHV  60 /* hypervisor state                               hflags */
+#define MSR_HV   60 /* hypervisor state                               hflags */
 #define MSR_TS0  34 /* Transactional state, 2 bits (Book3s)                  */
 #define MSR_TS1  33
 #define MSR_TM   32 /* Transactional Memory Available (Book3s)               */
 #define MSR_CM   31 /* Computation mode for BookE                     hflags */
 #define MSR_ICM  30 /* Interrupt computation mode for BookE                  */
-#define MSR_THV  29 /* hypervisor state for 32 bits PowerPC           hflags */
 #define MSR_GS   28 /* guest state for BookE                                 */
 #define MSR_UCLE 26 /* User-mode cache lock enable for BookE                 */
 #define MSR_VR   25 /* altivec available                            x hflags */
@@ -401,10 +398,13 @@ typedef struct ppc_v3_pate_t {
 
 #define msr_sf   ((env->msr >> MSR_SF)   & 1)
 #define msr_isf  ((env->msr >> MSR_ISF)  & 1)
-#define msr_shv  ((env->msr >> MSR_SHV)  & 1)
+#if defined(TARGET_PPC64)
+#define msr_hv   ((env->msr >> MSR_HV)   & 1)
+#else
+#define msr_hv   (0)
+#endif
 #define msr_cm   ((env->msr >> MSR_CM)   & 1)
 #define msr_icm  ((env->msr >> MSR_ICM)  & 1)
-#define msr_thv  ((env->msr >> MSR_THV)  & 1)
 #define msr_gs   ((env->msr >> MSR_GS)   & 1)
 #define msr_ucle ((env->msr >> MSR_UCLE) & 1)
 #define msr_vr   ((env->msr >> MSR_VR)   & 1)
@@ -449,16 +449,9 @@ typedef struct ppc_v3_pate_t {
 
 /* Hypervisor bit is more specific */
 #if defined(TARGET_PPC64)
-#define MSR_HVB (1ULL << MSR_SHV)
-#define msr_hv  msr_shv
-#else
-#if defined(PPC_EMULATE_32BITS_HYPV)
-#define MSR_HVB (1ULL << MSR_THV)
-#define msr_hv  msr_thv
+#define MSR_HVB (1ULL << MSR_HV)
 #else
 #define MSR_HVB (0ULL)
-#define msr_hv  (0)
-#endif
 #endif
 
 /* DSISR */
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 2f7125c51f..df3401cf06 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8764,7 +8764,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
                         PPC2_ISA205 | PPC2_ISA207S | PPC2_FP_CVT_S64 |
                         PPC2_TM | PPC2_PM_ISA206;
     pcc->msr_mask = (1ull << MSR_SF) |
-                    (1ull << MSR_SHV) |
+                    (1ull << MSR_HV) |
                     (1ull << MSR_TM) |
                     (1ull << MSR_VR) |
                     (1ull << MSR_VSX) |
@@ -8976,7 +8976,7 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
                         PPC2_ISA205 | PPC2_ISA207S | PPC2_FP_CVT_S64 |
                         PPC2_TM | PPC2_ISA300 | PPC2_PRCNTL;
     pcc->msr_mask = (1ull << MSR_SF) |
-                    (1ull << MSR_SHV) |
+                    (1ull << MSR_HV) |
                     (1ull << MSR_TM) |
                     (1ull << MSR_VR) |
                     (1ull << MSR_VSX) |
@@ -9186,7 +9186,7 @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
                         PPC2_ISA205 | PPC2_ISA207S | PPC2_FP_CVT_S64 |
                         PPC2_TM | PPC2_ISA300 | PPC2_PRCNTL;
     pcc->msr_mask = (1ull << MSR_SF) |
-                    (1ull << MSR_SHV) |
+                    (1ull << MSR_HV) |
                     (1ull << MSR_TM) |
                     (1ull << MSR_VR) |
                     (1ull << MSR_VSX) |
-- 
2.24.1




* [PATCH v7 02/17] ppc: Remove stub of PPC970 HID4 implementation
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
  2020-03-03  3:43 ` [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  3:43 ` [PATCH v7 03/17] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU David Gibson
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

The PowerPC 970 CPU was a cut-down POWER4, which had hypervisor capability.
However, it could be (and often was) strapped into "Apple mode", where the
hypervisor capabilities were disabled (essentially putting it always in
hypervisor mode).

That's actually the only mode of the 970 we support in qemu, and we're
unlikely to change that any time soon.  However, we do have a partial
implementation of the 970's HID4 register, which only affects things
relevant to hypervisor mode.

That stub is also really ugly, since it attempts to duplicate the effects
of HID4 by re-encoding it into the LPCR register used in newer CPUs, but
in a really confusing way.

Just get rid of it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/mmu-hash64.c         | 29 +----------------------------
 target/ppc/translate_init.inc.c | 20 ++++++++------------
 2 files changed, 9 insertions(+), 40 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index da8966ccf5..3e0be4d55f 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -1091,33 +1091,6 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
 
     /* Filter out bits */
     switch (env->mmu_model) {
-    case POWERPC_MMU_64B: /* 970 */
-        if (val & 0x40) {
-            lpcr |= LPCR_LPES0;
-        }
-        if (val & 0x8000000000000000ull) {
-            lpcr |= LPCR_LPES1;
-        }
-        if (val & 0x20) {
-            lpcr |= (0x4ull << LPCR_RMLS_SHIFT);
-        }
-        if (val & 0x4000000000000000ull) {
-            lpcr |= (0x2ull << LPCR_RMLS_SHIFT);
-        }
-        if (val & 0x2000000000000000ull) {
-            lpcr |= (0x1ull << LPCR_RMLS_SHIFT);
-        }
-        env->spr[SPR_RMOR] = ((lpcr >> 41) & 0xffffull) << 26;
-
-        /*
-         * XXX We could also write LPID from HID4 here
-         * but since we don't tag any translation on it
-         * it doesn't actually matter
-         *
-         * XXX For proper emulation of 970 we also need
-         * to dig HRMOR out of HID5
-         */
-        break;
     case POWERPC_MMU_2_03: /* P5p */
         lpcr = val & (LPCR_RMLS | LPCR_ILE |
                       LPCR_LPES0 | LPCR_LPES1 |
@@ -1154,7 +1127,7 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
         }
         break;
     default:
-        ;
+        g_assert_not_reached();
     }
     env->spr[SPR_LPCR] = lpcr;
     ppc_hash64_update_rmls(cpu);
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index df3401cf06..aecad96db3 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -7895,25 +7895,21 @@ static void spr_write_lpcr(DisasContext *ctx, int sprn, int gprn)
 {
     gen_helper_store_lpcr(cpu_env, cpu_gpr[gprn]);
 }
-
-static void spr_write_970_hid4(DisasContext *ctx, int sprn, int gprn)
-{
-#if defined(TARGET_PPC64)
-    spr_write_generic(ctx, sprn, gprn);
-    gen_helper_store_lpcr(cpu_env, cpu_gpr[gprn]);
-#endif
-}
-
 #endif /* !defined(CONFIG_USER_ONLY) */
 
 static void gen_spr_970_lpar(CPUPPCState *env)
 {
 #if !defined(CONFIG_USER_ONLY)
-    /* Logical partitionning */
-    /* PPC970: HID4 is effectively the LPCR */
+    /*
+     * PPC970: HID4 covers things later controlled by the LPCR and
+     * RMOR in later CPUs, but with a different encoding.  We only
+     * support the 970 in "Apple mode" which has all hypervisor
+     * facilities disabled by strapping, so we can basically just
+     * ignore it
+     */
     spr_register(env, SPR_970_HID4, "HID4",
                  SPR_NOACCESS, SPR_NOACCESS,
-                 &spr_read_generic, &spr_write_970_hid4,
+                 &spr_read_generic, &spr_write_generic,
                  0x00000000);
 #endif
 }
-- 
2.24.1




* [PATCH v7 03/17] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
  2020-03-03  3:43 ` [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode David Gibson
  2020-03-03  3:43 ` [PATCH v7 02/17] ppc: Remove stub of PPC970 HID4 implementation David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  3:43 ` [PATCH v7 04/17] target/ppc: Introduce ppc_hash64_use_vrma() helper David Gibson
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

On ppc we have the concept of virtual hypervisor ("vhyp") mode, where we
only model the non-hypervisor-privileged parts of the cpu.  Essentially we
model the hypervisor's behaviour from the point of view of a guest OS, but
we don't model the hypervisor's execution.

In particular, in this mode, qemu's notion of target physical address is
a guest physical address from the vcpu's point of view.  So accesses in
guest real mode don't require translation.  If we were modelling the
hypervisor mode, we'd need to translate the guest physical address into
a host physical address.

Currently, we handle this sloppily: we rely on setting up the virtual LPCR
and RMOR registers so that GPAs are simply HPAs plus an offset, which we
set to zero.  This is already conceptually dubious, since the LPCR and RMOR
registers don't exist in the non-hypervisor portion of the CPU.  It gets
worse with POWER9, where RMOR and LPCR[VPM0] no longer exist at all.

Clean this up by explicitly handling the vhyp case.  While we're there,
remove some unnecessary nesting of if statements that made the logic to
select the correct real mode behaviour a bit less clear than it could be.
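
In outline, the real mode handling after this patch looks like the
following sketch (simplified from the diff below, not the exact code):

    /* sketch: resolving a real mode access on the hash MMU */
    raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;   /* top 4 EA bits ignored */
    if (cpu->vhyp) {
        /* virtual hypervisor: EA == GPA == qemu guest address */
    } else if (msr_hv || !env->has_hv_mode) {
        /* HV mode: add HRMOR if the top EA bit is clear */
        if (!(eaddr >> 63)) {
            raddr |= env->spr[SPR_HRMOR];
        }
    } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
        /* emulated VRMA mode: translate via the VRMA SLB entry */
    } else {
        /* emulated old-style RMO mode: bounds check, then offset */
        if (raddr >= env->rmls) {
            /* out of bounds: raise the appropriate DSI/ISI */
        }
        raddr |= env->spr[SPR_RMOR];
    }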

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/mmu-hash64.c | 60 ++++++++++++++++++++++++-----------------
 1 file changed, 35 insertions(+), 25 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 3e0be4d55f..392f90e0ae 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -789,27 +789,30 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
          */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
-        /* In HV mode, add HRMOR if top EA bit is clear */
-        if (msr_hv || !env->has_hv_mode) {
+        if (cpu->vhyp) {
+            /*
+             * In virtual hypervisor mode, there's nothing to do:
+             *   EA == GPA == qemu guest address
+             */
+        } else if (msr_hv || !env->has_hv_mode) {
+            /* In HV mode, add HRMOR if top EA bit is clear */
             if (!(eaddr >> 63)) {
                 raddr |= env->spr[SPR_HRMOR];
             }
-        } else {
-            /* Otherwise, check VPM for RMA vs VRMA */
-            if (env->spr[SPR_LPCR] & LPCR_VPM0) {
-                slb = &env->vrma_slb;
-                if (slb->sps) {
-                    goto skip_slb_search;
-                }
-                /* Not much else to do here */
+        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            /* Emulated VRMA mode */
+            slb = &env->vrma_slb;
+            if (!slb->sps) {
+                /* Invalid VRMA setup, machine check */
                 cs->exception_index = POWERPC_EXCP_MCHECK;
                 env->error_code = 0;
                 return 1;
-            } else if (raddr < env->rmls) {
-                /* RMA. Check bounds in RMLS */
-                raddr |= env->spr[SPR_RMOR];
-            } else {
-                /* The access failed, generate the approriate interrupt */
+            }
+
+            goto skip_slb_search;
+        } else {
+            /* Emulated old-style RMO mode, bounds check against RMLS */
+            if (raddr >= env->rmls) {
                 if (rwx == 2) {
                     ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
                 } else {
@@ -821,6 +824,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                 }
                 return 1;
             }
+
+            raddr |= env->spr[SPR_RMOR];
         }
         tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                      PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
@@ -953,22 +958,27 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         /* In real mode the top 4 effective address bits are ignored */
         raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
 
-        /* In HV mode, add HRMOR if top EA bit is clear */
-        if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+        if (cpu->vhyp) {
+            /*
+             * In virtual hypervisor mode, there's nothing to do:
+             *   EA == GPA == qemu guest address
+             */
+            return raddr;
+        } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+            /* In HV mode, add HRMOR if top EA bit is clear */
             return raddr | env->spr[SPR_HRMOR];
-        }
-
-        /* Otherwise, check VPM for RMA vs VRMA */
-        if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            /* Emulated VRMA mode */
             slb = &env->vrma_slb;
             if (!slb->sps) {
                 return -1;
             }
-        } else if (raddr < env->rmls) {
-            /* RMA. Check bounds in RMLS */
-            return raddr | env->spr[SPR_RMOR];
         } else {
-            return -1;
+            /* Emulated old-style RMO mode, bounds check against RMLS */
+            if (raddr >= env->rmls) {
+                return -1;
+            }
+            return raddr | env->spr[SPR_RMOR];
         }
     } else {
         slb = slb_lookup(cpu, addr);
-- 
2.24.1




* [PATCH v7 04/17] target/ppc: Introduce ppc_hash64_use_vrma() helper
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (2 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 03/17] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-05  8:59   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 05/17] spapr, ppc: Remove VPM0/RMLS hacks for POWER9 David Gibson
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

When running guests under a hypervisor, the hypervisor obviously needs to
be protected from guest accesses even if those are in what the guest
considers real mode (translation off).  The POWER hardware provides two
ways of doing that: the old way has guest real mode accesses simply offset
and bounds-checked into host addresses.  It works, but requires that a
significant chunk of the guest's memory - the RMA - be physically
contiguous in the host, which is pretty inconvenient.  The new way, known
as VRMA, has guest real mode accesses translated in roughly the normal way
but with some special parameters.

In POWER7 and POWER8 the LPCR[VPM0] bit selected between the two modes, but
in POWER9 only VRMA mode is supported and LPCR[VPM0] no longer exists.  We
handle that difference in behaviour in ppc_hash64_set_isi(), but not in
the other places where we blindly check LPCR[VPM0].

Correct those instances with a new helper to tell if we should be in VRMA
mode.
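
With the helper in place, every real mode path makes the same decision
the same way; a minimal usage sketch (simplified from the diff below):

    if (ppc_hash64_use_vrma(env)) {
        /* emulated VRMA mode: translate via the VRMA SLB entry */
    } else {
        /* pre-POWER9 RMO mode: offset and bounds check the access */
    }

On POWER9 (POWERPC_MMU_3_00) the helper unconditionally returns true,
since LPCR[VPM0] no longer exists there.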

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/mmu-hash64.c | 43 ++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 392f90e0ae..e372c42add 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -668,6 +668,21 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
     return 0;
 }
 
+static bool ppc_hash64_use_vrma(CPUPPCState *env)
+{
+    switch (env->mmu_model) {
+    case POWERPC_MMU_3_00:
+        /*
+         * ISAv3.0 (POWER9) always uses VRMA, the VPM0 field and RMOR
+         * register no longer exist
+         */
+        return true;
+
+    default:
+        return !!(env->spr[SPR_LPCR] & LPCR_VPM0);
+    }
+}
+
 static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
 {
     CPUPPCState *env = &POWERPC_CPU(cs)->env;
@@ -676,15 +691,7 @@ static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
     if (msr_ir) {
         vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
     } else {
-        switch (env->mmu_model) {
-        case POWERPC_MMU_3_00:
-            /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
-            vpm = true;
-            break;
-        default:
-            vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
-            break;
-        }
+        vpm = ppc_hash64_use_vrma(env);
     }
     if (vpm && !msr_hv) {
         cs->exception_index = POWERPC_EXCP_HISI;
@@ -702,15 +709,7 @@ static void ppc_hash64_set_dsi(CPUState *cs, uint64_t dar, uint64_t dsisr)
     if (msr_dr) {
         vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
     } else {
-        switch (env->mmu_model) {
-        case POWERPC_MMU_3_00:
-            /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
-            vpm = true;
-            break;
-        default:
-            vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
-            break;
-        }
+        vpm = ppc_hash64_use_vrma(env);
     }
     if (vpm && !msr_hv) {
         cs->exception_index = POWERPC_EXCP_HDSI;
@@ -799,7 +798,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
             if (!(eaddr >> 63)) {
                 raddr |= env->spr[SPR_HRMOR];
             }
-        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+        } else if (ppc_hash64_use_vrma(env)) {
             /* Emulated VRMA mode */
             slb = &env->vrma_slb;
             if (!slb->sps) {
@@ -967,7 +966,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
             /* In HV mode, add HRMOR if top EA bit is clear */
             return raddr | env->spr[SPR_HRMOR];
-        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+        } else if (ppc_hash64_use_vrma(env)) {
             /* Emulated VRMA mode */
             slb = &env->vrma_slb;
             if (!slb->sps) {
@@ -1056,8 +1055,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
     slb->sps = NULL;
 
     /* Is VRMA enabled ? */
-    lpcr = env->spr[SPR_LPCR];
-    if (!(lpcr & LPCR_VPM0)) {
+    if (!ppc_hash64_use_vrma(env)) {
         return;
     }
 
@@ -1065,6 +1063,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
      * Make one up. Mostly ignore the ESID which will not be needed
      * for translation
      */
+    lpcr = env->spr[SPR_LPCR];
     vsid = SLB_VSID_VRMA;
     vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
     vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
-- 
2.24.1




* [PATCH v7 05/17] spapr, ppc: Remove VPM0/RMLS hacks for POWER9
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (3 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 04/17] target/ppc: Introduce ppc_hash64_use_vrma() helper David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  3:43 ` [PATCH v7 06/17] target/ppc: Remove RMOR register from POWER9 & POWER10 David Gibson
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

For the "pseries" machine, we use "virtual hypervisor" mode where we
only model the CPU in non-hypervisor privileged mode.  This means that
we need guest physical addresses within the modelled cpu to be treated
as absolute physical addresses.

We used to do that by clearing LPCR[VPM0] and setting LPCR[RMLS] to a high
limit so that the old offset-based translation for guest mode applied,
which does what we need.  However, POWER9 removed support for that
translation mode, which meant we needed some ugly hacks to keep it working.

We now explicitly handle this sort of translation for virtual hypervisor
mode, so the hacks aren't necessary.  We don't need to set VPM0 and RMLS
from the machine type code - they're now ignored in vhyp mode.  On the cpu
side we don't need to allow LPCR[RMLS] to be set on POWER9 in vhyp mode -
that was only there to allow the hack on the machine side.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 hw/ppc/spapr_cpu_core.c | 10 +---------
 target/ppc/mmu-hash64.c |  8 --------
 2 files changed, 1 insertion(+), 17 deletions(-)

diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index d09125d9af..36ed3a2b66 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -50,22 +50,14 @@ static void spapr_reset_vcpu(PowerPCCPU *cpu)
      * the settings below ensure proper operations with TCG in absence of
      * a real hypervisor.
      *
-     * Clearing VPM0 will also cause us to use RMOR in mmu-hash64.c for
-     * real mode accesses, which thankfully defaults to 0 and isn't
-     * accessible in guest mode.
-     *
      * Disable Power-saving mode Exit Cause exceptions for the CPU, so
      * we don't get spurious wakups before an RTAS start-cpu call.
      * For the same reason, set PSSCR_EC.
      */
-    lpcr &= ~(LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV | pcc->lpcr_pm);
+    lpcr &= ~(LPCR_VPM1 | LPCR_ISL | LPCR_KBV | pcc->lpcr_pm);
     lpcr |= LPCR_LPES0 | LPCR_LPES1;
     env->spr[SPR_PSSCR] |= PSSCR_EC;
 
-    /* Set RMLS to the max (ie, 16G) */
-    lpcr &= ~LPCR_RMLS;
-    lpcr |= 1ull << LPCR_RMLS_SHIFT;
-
     ppc_store_lpcr(cpu, lpcr);
 
     /* Set a full AMOR so guest can use the AMR as it sees fit */
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index e372c42add..caf47ad6fc 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -1126,14 +1126,6 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
                       (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
                       LPCR_DEE | LPCR_OEE)) | LPCR_MER | LPCR_GTSE | LPCR_TC |
                       LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE);
-        /*
-         * If we have a virtual hypervisor, we need to bring back RMLS. It
-         * doesn't exist on an actual P9 but that's all we know how to
-         * configure with softmmu at the moment
-         */
-        if (cpu->vhyp) {
-            lpcr |= (val & LPCR_RMLS);
-        }
         break;
     default:
         g_assert_not_reached();
-- 
2.24.1




* [PATCH v7 06/17] target/ppc: Remove RMOR register from POWER9 & POWER10
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (4 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 05/17] spapr, ppc: Remove VPM0/RMLS hacks for POWER9 David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  3:43 ` [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking David Gibson
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

Currently we create the Real Mode Offset Register (RMOR) on all Book3S cpus
from POWER7 onwards.  However, the translation mode that the RMOR controls
is no longer supported in POWER9, and so the register has been removed from
the architecture.

Remove it from our model on POWER9 and POWER10.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/translate_init.inc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index aecad96db3..f7acd3d61d 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8015,12 +8015,16 @@ static void gen_spr_book3s_ids(CPUPPCState *env)
                  SPR_NOACCESS, SPR_NOACCESS,
                  &spr_read_generic, &spr_write_generic,
                  0x00000000);
-    spr_register_hv(env, SPR_RMOR, "RMOR",
+    spr_register_hv(env, SPR_HRMOR, "HRMOR",
                  SPR_NOACCESS, SPR_NOACCESS,
                  SPR_NOACCESS, SPR_NOACCESS,
                  &spr_read_generic, &spr_write_generic,
                  0x00000000);
-    spr_register_hv(env, SPR_HRMOR, "HRMOR",
+}
+
+static void gen_spr_rmor(CPUPPCState *env)
+{
+    spr_register_hv(env, SPR_RMOR, "RMOR",
                  SPR_NOACCESS, SPR_NOACCESS,
                  SPR_NOACCESS, SPR_NOACCESS,
                  &spr_read_generic, &spr_write_generic,
@@ -8497,6 +8501,7 @@ static void init_proc_POWER7(CPUPPCState *env)
 
     /* POWER7 Specific Registers */
     gen_spr_book3s_ids(env);
+    gen_spr_rmor(env);
     gen_spr_amr(env);
     gen_spr_book3s_purr(env);
     gen_spr_power5p_common(env);
@@ -8637,6 +8642,7 @@ static void init_proc_POWER8(CPUPPCState *env)
 
     /* POWER8 Specific Registers */
     gen_spr_book3s_ids(env);
+    gen_spr_rmor(env);
     gen_spr_amr(env);
     gen_spr_iamr(env);
     gen_spr_book3s_purr(env);
-- 
2.24.1




* [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (5 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 06/17] target/ppc: Remove RMOR register from POWER9 & POWER10 David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-05  8:56   ` Philippe Mathieu-Daudé
  2020-03-10 10:06   ` Cédric Le Goater
  2020-03-03  3:43 ` [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS] David Gibson
                   ` (9 subsequent siblings)
  16 siblings, 2 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

When we store the Logical Partitioning Control Register (LPCR) we have a
big switch statement to work out which bits are valid for the cpu model
we're emulating.

As well as being ugly, this isn't really conceptually correct, since it is
based on the mmu_model variable, whereas the LPCR isn't (only) about the
MMU, so mmu_model is basically just acting as a proxy for the cpu model.

Handle this in a simpler way, by adding a suitable lpcr_mask to the QOM
class.
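
The net effect is that the per-model knowledge moves into class
initialization and the store itself becomes a simple mask; a sketch
(values for POWER5+, taken from the diff below):

    pcc->lpcr_mask = LPCR_RMLS | LPCR_ILE | LPCR_LPES0 | LPCR_LPES1 |
                     LPCR_RMI | LPCR_HDICE;

    /* ... and in ppc_store_lpcr() ... */
    env->spr[SPR_LPCR] = val & pcc->lpcr_mask;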

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/cpu-qom.h            |  1 +
 target/ppc/mmu-hash64.c         | 36 ++-------------------------------
 target/ppc/translate_init.inc.c | 27 +++++++++++++++++++++----
 3 files changed, 26 insertions(+), 38 deletions(-)

diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
index e499575dc8..15d6b54a7d 100644
--- a/target/ppc/cpu-qom.h
+++ b/target/ppc/cpu-qom.h
@@ -177,6 +177,7 @@ typedef struct PowerPCCPUClass {
     uint64_t insns_flags;
     uint64_t insns_flags2;
     uint64_t msr_mask;
+    uint64_t lpcr_mask;         /* Available bits in the LPCR */
     uint64_t lpcr_pm;           /* Power-saving mode Exit Cause Enable bits */
     powerpc_mmu_t   mmu_model;
     powerpc_excp_t  excp_model;
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index caf47ad6fc..0ef330a614 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -1095,42 +1095,10 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
 
 void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
 {
+    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
     CPUPPCState *env = &cpu->env;
-    uint64_t lpcr = 0;
 
-    /* Filter out bits */
-    switch (env->mmu_model) {
-    case POWERPC_MMU_2_03: /* P5p */
-        lpcr = val & (LPCR_RMLS | LPCR_ILE |
-                      LPCR_LPES0 | LPCR_LPES1 |
-                      LPCR_RMI | LPCR_HDICE);
-        break;
-    case POWERPC_MMU_2_06: /* P7 */
-        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
-                      LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
-                      LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
-                      LPCR_MER | LPCR_TC |
-                      LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE);
-        break;
-    case POWERPC_MMU_2_07: /* P8 */
-        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
-                      LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
-                      LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
-                      LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
-                      LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE);
-        break;
-    case POWERPC_MMU_3_00: /* P9 */
-        lpcr = val & (LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
-                      (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
-                      LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
-                      (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
-                      LPCR_DEE | LPCR_OEE)) | LPCR_MER | LPCR_GTSE | LPCR_TC |
-                      LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE);
-        break;
-    default:
-        g_assert_not_reached();
-    }
-    env->spr[SPR_LPCR] = lpcr;
+    env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
     ppc_hash64_update_rmls(cpu);
     ppc_hash64_update_vrma(cpu);
 }
diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index f7acd3d61d..68aa4dfad8 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -8476,6 +8476,8 @@ POWERPC_FAMILY(POWER5P)(ObjectClass *oc, void *data)
                     (1ull << MSR_DR) |
                     (1ull << MSR_PMM) |
                     (1ull << MSR_RI);
+    pcc->lpcr_mask = LPCR_RMLS | LPCR_ILE | LPCR_LPES0 | LPCR_LPES1 |
+        LPCR_RMI | LPCR_HDICE;
     pcc->mmu_model = POWERPC_MMU_2_03;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
@@ -8614,6 +8616,12 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
                     (1ull << MSR_PMM) |
                     (1ull << MSR_RI) |
                     (1ull << MSR_LE);
+    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
+        LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
+        LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
+        LPCR_MER | LPCR_TC |
+        LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE;
+    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
     pcc->mmu_model = POWERPC_MMU_2_06;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
@@ -8630,7 +8638,6 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
     pcc->l1_dcache_size = 0x8000;
     pcc->l1_icache_size = 0x8000;
     pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
-    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
 }
 
 static void init_proc_POWER8(CPUPPCState *env)
@@ -8785,6 +8792,13 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
                     (1ull << MSR_TS0) |
                     (1ull << MSR_TS1) |
                     (1ull << MSR_LE);
+    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
+        LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
+        LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
+        LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
+        LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE;
+    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
+                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
     pcc->mmu_model = POWERPC_MMU_2_07;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
@@ -8802,8 +8816,6 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
     pcc->l1_dcache_size = 0x8000;
     pcc->l1_icache_size = 0x8000;
     pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
-    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
-                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
 }
 
 #ifdef CONFIG_SOFTMMU
@@ -8995,6 +9007,14 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
                     (1ull << MSR_PMM) |
                     (1ull << MSR_RI) |
                     (1ull << MSR_LE);
+    pcc->lpcr_mask = LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
+        (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
+        LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
+        (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
+                             LPCR_DEE | LPCR_OEE))
+        | LPCR_MER | LPCR_GTSE | LPCR_TC |
+        LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE;
+    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
     pcc->mmu_model = POWERPC_MMU_3_00;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc64_v3_handle_mmu_fault;
@@ -9014,7 +9034,6 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
     pcc->l1_dcache_size = 0x8000;
     pcc->l1_icache_size = 0x8000;
     pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
-    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
 }
 
 #ifdef CONFIG_SOFTMMU
-- 
2.24.1




* [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS]
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (6 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  7:52   ` Greg Kurz
  2020-03-05  8:53   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 09/17] target/ppc: Correct RMLS table David Gibson
                   ` (8 subsequent siblings)
  16 siblings, 2 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

Currently we use a big switch statement in ppc_hash64_update_rmls() to work
out what the right RMA limit is based on the LPCR[RMLS] field.  There's no
formula for this - it's just an arbitrary mapping defined by the existing
CPU implementations - but we can make it a bit more readable by using a
lookup table rather than a switch.  In addition we can use the MiB/GiB
symbols to make it a bit clearer.

While there we add a bit of clarity and rationale to the comment about
what happens if the LPCR[RMLS] doesn't contain a valid value.
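
As a worked example against the table below: an OS setting LPCR[RMLS]
to 0x3 gets a 64 MiB RMA limit, while an unsupported encoding such as
0x5 hits a zero entry and so yields a 0-sized RMA:

    rmls = (env->spr[SPR_LPCR] & LPCR_RMLS) >> LPCR_RMLS_SHIFT;
    limit = rma_sizes[rmls];    /* 0x3 -> 64 * MiB, 0x5 -> 0 */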

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-hash64.c | 63 ++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 36 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 0ef330a614..934989e6d9 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -18,6 +18,7 @@
  * License along with this library; if not, see <http://www.gnu.org/licenses/>.
  */
 #include "qemu/osdep.h"
+#include "qemu/units.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
@@ -757,6 +758,31 @@ static void ppc_hash64_set_c(PowerPCCPU *cpu, hwaddr ptex, uint64_t pte1)
     stb_phys(CPU(cpu)->as, base + offset, (pte1 & 0xff) | 0x80);
 }
 
+static target_ulong rmls_limit(PowerPCCPU *cpu)
+{
+    CPUPPCState *env = &cpu->env;
+    /*
+     * This is the full 4 bits encoding of POWER8. Previous
+     * CPUs only support a subset of these but the filtering
+     * is done when writing LPCR.
+     *
+     * Unsupported values mean the OS has shot itself in the
+     * foot. Return a 0-sized RMA in this case, which we expect
+     * to trigger an immediate DSI or ISI
+     */
+    static const target_ulong rma_sizes[16] = {
+        [1] = 16 * GiB,
+        [2] = 1 * GiB,
+        [3] = 64 * MiB,
+        [4] = 256 * MiB,
+        [7] = 128 * MiB,
+        [8] = 32 * MiB,
+    };
+    target_ulong rmls = (env->spr[SPR_LPCR] & LPCR_RMLS) >> LPCR_RMLS_SHIFT;
+
+    return rma_sizes[rmls];
+}
+
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                                 int rwx, int mmu_idx)
 {
@@ -1006,41 +1032,6 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
     cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
 }
 
-static void ppc_hash64_update_rmls(PowerPCCPU *cpu)
-{
-    CPUPPCState *env = &cpu->env;
-    uint64_t lpcr = env->spr[SPR_LPCR];
-
-    /*
-     * This is the full 4 bits encoding of POWER8. Previous
-     * CPUs only support a subset of these but the filtering
-     * is done when writing LPCR
-     */
-    switch ((lpcr & LPCR_RMLS) >> LPCR_RMLS_SHIFT) {
-    case 0x8: /* 32MB */
-        env->rmls = 0x2000000ull;
-        break;
-    case 0x3: /* 64MB */
-        env->rmls = 0x4000000ull;
-        break;
-    case 0x7: /* 128MB */
-        env->rmls = 0x8000000ull;
-        break;
-    case 0x4: /* 256MB */
-        env->rmls = 0x10000000ull;
-        break;
-    case 0x2: /* 1GB */
-        env->rmls = 0x40000000ull;
-        break;
-    case 0x1: /* 16GB */
-        env->rmls = 0x400000000ull;
-        break;
-    default:
-        /* What to do here ??? */
-        env->rmls = 0;
-    }
-}
-
 static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
 {
     CPUPPCState *env = &cpu->env;
@@ -1099,7 +1090,7 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
     CPUPPCState *env = &cpu->env;
 
     env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
-    ppc_hash64_update_rmls(cpu);
+    env->rmls = rmls_limit(cpu);
     ppc_hash64_update_vrma(cpu);
 }
 
-- 
2.24.1




* [PATCH v7 09/17] target/ppc: Correct RMLS table
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (7 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS] David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  3:43 ` [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand David Gibson
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

The table of RMA limits based on the LPCR[RMLS] field is slightly wrong.
We're missing the RMLS == 0 => 256 GiB RMA option, which is available on
POWER8, so add that.

The comment that goes with the table is much more wrong.  We *don't* filter
invalid RMLS values when writing the LPCR, and there's not really a
sensible way to do so.  Furthermore, while in theory the set of RMLS values
is implementation dependent, it seems in practice the same set has been
available since around POWER4+ up until POWER8, the last model which
supports RMLS at all.  So, correct that as well.
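
With the corrected table the previously-missing encoding is handled too;
a one-line sketch:

    limit = rma_sizes[0];    /* now 256 * GiB (POWER8), previously 0 */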

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/mmu-hash64.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 934989e6d9..fcccaabb88 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -762,15 +762,16 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
 {
     CPUPPCState *env = &cpu->env;
     /*
-     * This is the full 4 bits encoding of POWER8. Previous
-     * CPUs only support a subset of these but the filtering
-     * is done when writing LPCR.
+     * In theory the meanings of RMLS values are implementation
+     * dependent.  In practice, this seems to have been the set from
+     * POWER4+..POWER8, and RMLS is no longer supported in POWER9.
      *
      * Unsupported values mean the OS has shot itself in the
      * foot. Return a 0-sized RMA in this case, which we expect
      * to trigger an immediate DSI or ISI
      */
     static const target_ulong rma_sizes[16] = {
+        [0] = 256 * GiB,
         [1] = 16 * GiB,
         [2] = 1 * GiB,
         [3] = 64 * MiB,
-- 
2.24.1




* [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (8 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 09/17] target/ppc: Correct RMLS table David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  8:57   ` Greg Kurz
  2020-03-05  8:48   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently David Gibson
                   ` (6 subsequent siblings)
  16 siblings, 2 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

When the LPCR is written, we update the env->rmls field with the RMA limit
it implies.  Simplify things by just calculating the value directly from
the LPCR value when we need it.

It's possible this is a little slower, but it's unlikely to be significant,
since this is only for real mode accesses in a translation configuration
that's not used very often, and the whole thing is behind the qemu TLB
anyway.  Therefore, keeping the number of state variables down and not
having to worry about making sure it's always in sync seems the better
option.
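
Concretely, the bounds check now derives the limit at the point of use
rather than reading a cached field (sketch from the diff below):

    target_ulong limit = rmls_limit(cpu);   /* derived from LPCR[RMLS] */

    if (raddr >= limit) {
        /* access outside the RMA */
    }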

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target/ppc/cpu.h        | 1 -
 target/ppc/mmu-hash64.c | 9 ++++++---
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 8077fdb068..f9871b1233 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1046,7 +1046,6 @@ struct CPUPPCState {
     uint64_t insns_flags2;
 #if defined(TARGET_PPC64)
     ppc_slb_t vrma_slb;
-    target_ulong rmls;
 #endif
 
     int error_code;
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index fcccaabb88..4fd7b7ee74 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -837,8 +837,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
 
             goto skip_slb_search;
         } else {
+            target_ulong limit = rmls_limit(cpu);
+
             /* Emulated old-style RMO mode, bounds check against RMLS */
-            if (raddr >= env->rmls) {
+            if (raddr >= limit) {
                 if (rwx == 2) {
                     ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
                 } else {
@@ -1000,8 +1002,10 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
                 return -1;
             }
         } else {
+            target_ulong limit = rmls_limit(cpu);
+
             /* Emulated old-style RMO mode, bounds check against RMLS */
-            if (raddr >= env->rmls) {
+            if (raddr >= limit) {
                 return -1;
             }
             return raddr | env->spr[SPR_RMOR];
@@ -1091,7 +1095,6 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
     CPUPPCState *env = &cpu->env;
 
     env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
-    env->rmls = rmls_limit(cpu);
     ppc_hash64_update_vrma(cpu);
 }
 
-- 
2.24.1




* [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (9 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  9:37   ` Greg Kurz
  2020-03-05  9:17   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 12/17] spapr: Don't use weird units for MIN_RMA_SLOF David Gibson
                   ` (5 subsequent siblings)
  16 siblings, 2 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

Currently, we construct the SLBE used for VRMA translations when the LPCR
is written (which controls some bits in the SLBE), then use it later for
translations.

This is a bit complex and confusing - simplify it by constructing the SLBE
directly from the LPCR when we need it.
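
After this patch the SLBE lives on the caller's stack and is built only
when a VRMA translation is actually needed (sketch, matching the diff
below):

    ppc_slb_t vrma_slbe;

    if (ppc_hash64_use_vrma(env)) {
        if (build_vrma_slbe(cpu, &vrma_slbe) != 0) {
            /* bad LPCR[VRMASD] encoding: machine check / failure */
        }
        /* otherwise translate using vrma_slbe */
    }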

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target/ppc/cpu.h        |  3 --
 target/ppc/mmu-hash64.c | 92 ++++++++++++++++-------------------------
 2 files changed, 35 insertions(+), 60 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index f9871b1233..5a55fb02bd 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1044,9 +1044,6 @@ struct CPUPPCState {
     uint32_t flags;
     uint64_t insns_flags;
     uint64_t insns_flags2;
-#if defined(TARGET_PPC64)
-    ppc_slb_t vrma_slb;
-#endif
 
     int error_code;
     uint32_t pending_interrupts;
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 4fd7b7ee74..34f6009b1e 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -784,11 +784,41 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
     return rma_sizes[rmls];
 }
 
+static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
+{
+    CPUPPCState *env = &cpu->env;
+    target_ulong lpcr = env->spr[SPR_LPCR];
+    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
+    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
+    int i;
+
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
+
+        if (!sps->page_shift) {
+            break;
+        }
+
+        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
+            slb->esid = SLB_ESID_V;
+            slb->vsid = vsid;
+            slb->sps = sps;
+            return 0;
+        }
+    }
+
+    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
+                 TARGET_FMT_lx"\n", lpcr);
+
+    return -1;
+}
+
 int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                                 int rwx, int mmu_idx)
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
+    ppc_slb_t vrma_slbe;
     ppc_slb_t *slb;
     unsigned apshift;
     hwaddr ptex;
@@ -827,8 +857,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
             }
         } else if (ppc_hash64_use_vrma(env)) {
             /* Emulated VRMA mode */
-            slb = &env->vrma_slb;
-            if (!slb->sps) {
+            slb = &vrma_slbe;
+            if (build_vrma_slbe(cpu, slb) != 0) {
                 /* Invalid VRMA setup, machine check */
                 cs->exception_index = POWERPC_EXCP_MCHECK;
                 env->error_code = 0;
@@ -976,6 +1006,7 @@ skip_slb_search:
 hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
 {
     CPUPPCState *env = &cpu->env;
+    ppc_slb_t vrma_slbe;
     ppc_slb_t *slb;
     hwaddr ptex, raddr;
     ppc_hash_pte64_t pte;
@@ -997,8 +1028,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
             return raddr | env->spr[SPR_HRMOR];
         } else if (ppc_hash64_use_vrma(env)) {
             /* Emulated VRMA mode */
-            slb = &env->vrma_slb;
-            if (!slb->sps) {
+            slb = &vrma_slbe;
+            if (build_vrma_slbe(cpu, slb) != 0) {
                 return -1;
             }
         } else {
@@ -1037,65 +1068,12 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
     cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
 }
 
-static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
-{
-    CPUPPCState *env = &cpu->env;
-    const PPCHash64SegmentPageSizes *sps = NULL;
-    target_ulong esid, vsid, lpcr;
-    ppc_slb_t *slb = &env->vrma_slb;
-    uint32_t vrmasd;
-    int i;
-
-    /* First clear it */
-    slb->esid = slb->vsid = 0;
-    slb->sps = NULL;
-
-    /* Is VRMA enabled ? */
-    if (!ppc_hash64_use_vrma(env)) {
-        return;
-    }
-
-    /*
-     * Make one up. Mostly ignore the ESID which will not be needed
-     * for translation
-     */
-    lpcr = env->spr[SPR_LPCR];
-    vsid = SLB_VSID_VRMA;
-    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
-    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
-    esid = SLB_ESID_V;
-
-    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
-        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
-
-        if (!sps1->page_shift) {
-            break;
-        }
-
-        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
-            sps = sps1;
-            break;
-        }
-    }
-
-    if (!sps) {
-        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
-                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
-        return;
-    }
-
-    slb->vsid = vsid;
-    slb->esid = esid;
-    slb->sps = sps;
-}
-
 void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
 {
     PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
     CPUPPCState *env = &cpu->env;
 
     env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
-    ppc_hash64_update_vrma(cpu);
 }
 
 void helper_store_lpcr(CPUPPCState *env, target_ulong val)
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 12/17] spapr: Don't use weird units for MIN_RMA_SLOF
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (10 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-05  8:39   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 13/17] spapr,ppc: Simplify signature of kvmppc_rma_size() David Gibson
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

MIN_RMA_SLOF records the minimum amount of RMA that the SLOF firmware
requires.  It lets us give a meaningful error if the RMA ends up too small,
rather than just letting SLOF crash.

It's currently stored as a raw number of megabytes, which is unusual for
global constants.  Move the megabyte scaling into the definition of the
constant itself, as most other size constants do.

Change from M to MiB in the associated message while we're at it.
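
For illustration, a minimal standalone sketch of the convention (not code
from this patch; MiB normally comes from "qemu/units.h", mirrored here):

    #include <stdio.h>

    #define MiB (1024 * 1024)           /* as defined in qemu/units.h */
    #define MIN_RMA_SLOF (128 * MiB)    /* the scaling lives in the constant */

    int main(void)
    {
        unsigned long rma_size = 64UL * MiB;    /* hypothetical, too small */

        if (rma_size < MIN_RMA_SLOF) {
            printf("need >= %luMiB guest RMA, got %luMiB\n",
                   (unsigned long)(MIN_RMA_SLOF / MiB), rma_size / MiB);
        }
        return 0;
    }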

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
---
 hw/ppc/spapr.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index cc10798be4..510494ad87 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -103,7 +103,7 @@
 #define FW_OVERHEAD             0x2800000
 #define KERNEL_LOAD_ADDR        FW_MAX_SIZE
 
-#define MIN_RMA_SLOF            128UL
+#define MIN_RMA_SLOF            (128 * MiB)
 
 #define PHANDLE_INTC            0x00001111
 
@@ -2956,10 +2956,10 @@ static void spapr_machine_init(MachineState *machine)
         }
     }
 
-    if (spapr->rma_size < (MIN_RMA_SLOF * MiB)) {
+    if (spapr->rma_size < MIN_RMA_SLOF) {
         error_report(
-            "pSeries SLOF firmware requires >= %ldM guest RMA (Real Mode Area memory)",
-            MIN_RMA_SLOF);
+            "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
+            MIN_RMA_SLOF / MiB);
         exit(1);
     }
 
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 13/17] spapr,ppc: Simplify signature of kvmppc_rma_size()
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (11 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 12/17] spapr: Don't use weird units for MIN_RMA_SLOF David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-05  8:47   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 14/17] spapr: Don't attempt to clamp RMA to VRMA constraint David Gibson
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, Cedric Le Goater, David Gibson

This function calculates the maximum size of the RMA as implied by the
host's page size and the structure of the VRMA (there are a number of
other constraints on the RMA size which will supersede this one in many
circumstances).

The current interface takes the current RMA size estimate, and clamps it
to the VRMA derived size.  The only current caller passes in an arguably
wrong value (it will match the current RMA estimate in some but not all
cases).

We want to fix that, but for now just keep concerns separated by having the
KVM helper function just return the VRMA derived limit, and let the caller
combine it with other constraints.  We call the new function
kvmppc_vrma_limit() to more clearly indicate its limited responsibility.

The helper should only ever be called in the KVM-enabled case, so replace
its !CONFIG_KVM stub with an assert() rather than a dummy value.
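
To make the limit concrete, here is a standalone worked sketch of the
formula visible in the diff below (plain arithmetic, not the actual KVM
code; it assumes the default HPT sizing of 1/128th of guest RAM):

    #include <stdint.h>
    #include <stdio.h>

    /* VRMA-derived RMA limit, as computed in kvmppc_vrma_limit() */
    static uint64_t vrma_limit(unsigned page_shift, unsigned hash_shift)
    {
        return 1ULL << (page_shift + hash_shift - 7);
    }

    int main(void)
    {
        /* 1GiB guest => 8MiB HPT => hash_shift == 23 */
        printf("64kiB pages: %llu MiB\n",      /* 4096 MiB, i.e. 4x RAM */
               (unsigned long long)(vrma_limit(16, 23) >> 20));
        printf("4kiB pages:  %llu MiB\n",      /* 256 MiB, i.e. RAM / 4 */
               (unsigned long long)(vrma_limit(12, 23) >> 20));
        return 0;
    }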

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cedric Le Goater <clg@fr.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 hw/ppc/spapr.c       | 5 +++--
 target/ppc/kvm.c     | 5 ++---
 target/ppc/kvm_ppc.h | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 510494ad87..18bf4bc3de 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1586,8 +1586,9 @@ void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr)
     spapr_reallocate_hpt(spapr, hpt_shift, &error_fatal);
 
     if (spapr->vrma_adjust) {
-        spapr->rma_size = kvmppc_rma_size(spapr_node0_size(MACHINE(spapr)),
-                                          spapr->htab_shift);
+        hwaddr vrma_limit = kvmppc_vrma_limit(spapr->htab_shift);
+
+        spapr->rma_size = MIN(spapr_node0_size(MACHINE(spapr)), vrma_limit);
     }
 }
 
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 7f44b1aa1a..597f72be1b 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -2113,7 +2113,7 @@ void kvmppc_error_append_smt_possible_hint(Error *const *errp)
 
 
 #ifdef TARGET_PPC64
-uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
+uint64_t kvmppc_vrma_limit(unsigned int hash_shift)
 {
     struct kvm_ppc_smmu_info info;
     long rampagesize, best_page_shift;
@@ -2140,8 +2140,7 @@ uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
         }
     }
 
-    return MIN(current_size,
-               1ULL << (best_page_shift + hash_shift - 7));
+    return 1ULL << (best_page_shift + hash_shift - 7);
 }
 #endif
 
diff --git a/target/ppc/kvm_ppc.h b/target/ppc/kvm_ppc.h
index 9e4f2357cc..332fa0aa1c 100644
--- a/target/ppc/kvm_ppc.h
+++ b/target/ppc/kvm_ppc.h
@@ -47,7 +47,7 @@ void *kvmppc_create_spapr_tce(uint32_t liobn, uint32_t page_shift,
                               int *pfd, bool need_vfio);
 int kvmppc_remove_spapr_tce(void *table, int pfd, uint32_t window_size);
 int kvmppc_reset_htab(int shift_hint);
-uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift);
+uint64_t kvmppc_vrma_limit(unsigned int hash_shift);
 bool kvmppc_has_cap_spapr_vfio(void);
 #endif /* !CONFIG_USER_ONLY */
 bool kvmppc_has_cap_epr(void);
@@ -255,10 +255,9 @@ static inline int kvmppc_reset_htab(int shift_hint)
     return 0;
 }
 
-static inline uint64_t kvmppc_rma_size(uint64_t current_size,
-                                       unsigned int hash_shift)
+static inline uint64_t kvmppc_vrma_limit(unsigned int hash_shift)
 {
-    return ram_size;
+    g_assert_not_reached();
 }
 
 static inline bool kvmppc_hpt_needs_host_contiguous_pages(void)
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 14/17] spapr: Don't attempt to clamp RMA to VRMA constraint
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (12 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 13/17] spapr,ppc: Simplify signature of kvmppc_rma_size() David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03  9:55   ` Greg Kurz
  2020-03-03  3:43 ` [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types David Gibson
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

The Real Mode Area (RMA) is the part of memory which a guest can access
when in real (MMU off) mode.  Of course, for a guest under KVM, the MMU
isn't really turned off, it's just in a special translation mode - Virtual
Real Mode Area (VRMA) - which looks like real mode in guest mode.

The mechanics of how this works when using the hash MMU (HPT) put a
constraint on the size of the RMA, which depends on the size of the
HPT.  So, the latter part of spapr_setup_hpt_and_vrma() clamps the RMA
we advertise to the guest based on this VRMA limit.

There are several things wrong with this:
 1) spapr_setup_hpt_and_vrma() doesn't actually clamp, it takes the minimum
    of Node 0 memory size and the VRMA limit.  That will *often* work the
    same as clamping, but there can be other constraints on RMA size which
    supersede Node 0 memory size.  We have real bugs caused by this
    (currently worked around in the guest kernel)
 2) Some callers of spapr_setup_hpt_and_vrma() are in a situation where
    we're past the point that we can actually advertise an RMA limit to the
    guest
 3) But most fundamentally, the VRMA limit depends on host configuration
    (page size) which shouldn't be visible to the guest, but this partially
    exposes it.  This can cause problems with migration in certain edge
    cases, although we will mostly get away with it.

In practice, this clamping is almost never applied anyway.  With 64kiB
pages and the normal rules for sizing of the HPT, the theoretical VRMA
limit will be 4x(guest memory size) and so never hit.  It will hit with
4kiB pages, where it will be (guest memory size)/4.  However all mainstream
distro kernels for POWER have used a 64kiB page size for at least 10 years.

So, simply replace this logic with a check that the RMA we've calculated
based only on guest visible configuration will fit within the host implied
VRMA limit.  This can break if running HPT guests on a host kernel with
4kiB page size.  As noted that's very rare.  There also exist several
possible workarounds:
  * Change the host kernel to use 64kiB pages
  * Use radix MMU (RPT) guests instead of HPT
  * Use 64kiB hugepages on the host to back guest memory
  * Increase the guest memory size so that the RMA hits one of the fixed
    limits before the VRMA limit.  This is relatively easy on POWER8 which
    has a 16GiB limit, harder on POWER9 which has a 1TiB limit.
  * Use a guest NUMA configuration which artificially constrains the RMA
    within the VRMA limit (the RMA must always fit within Node 0); see the
    example just below this list.
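
As an example of that last workaround (hypothetical sizes; these are the
standard memory-backend and NUMA options, but spellings may vary between
QEMU versions):

    # Constrain node 0 - and therefore the RMA - to 2GiB of an 8GiB guest
    qemu-system-ppc64 -machine pseries -m 8G \
        -object memory-backend-ram,id=m0,size=2G \
        -object memory-backend-ram,id=m1,size=6G \
        -numa node,nodeid=0,memdev=m0 \
        -numa node,nodeid=1,memdev=m1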

Previously, on KVM, we also temporarily reduced the rma_size to 256M so
that we'd load the kernel and initrd safely, regardless of the VRMA
limit.  This was a) confusing, b) could significantly limit the size of
images we could load and c) introduced a behavioural difference between
KVM and TCG.  So we remove that as well.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 hw/ppc/spapr.c         | 28 ++++++++++------------------
 hw/ppc/spapr_hcall.c   |  4 ++--
 include/hw/ppc/spapr.h |  3 +--
 3 files changed, 13 insertions(+), 22 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 18bf4bc3de..ef7667455c 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1569,7 +1569,7 @@ void spapr_reallocate_hpt(SpaprMachineState *spapr, int shift,
     spapr_set_all_lpcrs(0, LPCR_HR | LPCR_UPRT);
 }
 
-void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr)
+void spapr_setup_hpt(SpaprMachineState *spapr)
 {
     int hpt_shift;
 
@@ -1585,10 +1585,16 @@ void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr)
     }
     spapr_reallocate_hpt(spapr, hpt_shift, &error_fatal);
 
-    if (spapr->vrma_adjust) {
+    if (kvm_enabled()) {
         hwaddr vrma_limit = kvmppc_vrma_limit(spapr->htab_shift);
 
-        spapr->rma_size = MIN(spapr_node0_size(MACHINE(spapr)), vrma_limit);
+        /* Check our RMA fits in the possible VRMA */
+        if (vrma_limit < spapr->rma_size) {
+            error_report("Unable to create %" HWADDR_PRIu
+                         "MiB RMA (VRMA only allows %" HWADDR_PRIu "MiB",
+                         spapr->rma_size / MiB, vrma_limit / MiB);
+            exit(EXIT_FAILURE);
+        }
     }
 }
 
@@ -1628,7 +1634,7 @@ static void spapr_machine_reset(MachineState *machine)
         spapr->patb_entry = PATE1_GR;
         spapr_set_all_lpcrs(LPCR_HR | LPCR_UPRT, LPCR_HR | LPCR_UPRT);
     } else {
-        spapr_setup_hpt_and_vrma(spapr);
+        spapr_setup_hpt(spapr);
     }
 
     qemu_devices_reset();
@@ -2695,20 +2701,6 @@ static void spapr_machine_init(MachineState *machine)
 
     spapr->rma_size = node0_size;
 
-    /* With KVM, we don't actually know whether KVM supports an
-     * unbounded RMA (PR KVM) or is limited by the hash table size
-     * (HV KVM using VRMA), so we always assume the latter
-     *
-     * In that case, we also limit the initial allocations for RTAS
-     * etc... to 256M since we have no way to know what the VRMA size
-     * is going to be as it depends on the size of the hash table
-     * which isn't determined yet.
-     */
-    if (kvm_enabled()) {
-        spapr->vrma_adjust = 1;
-        spapr->rma_size = MIN(spapr->rma_size, 0x10000000);
-    }
-
     /* Actually we don't support unbounded RMA anymore since we added
      * proper emulation of HV mode. The max we can get is 16G which
      * also happens to be what we configure for PAPR mode so make sure
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index c2b3286625..40c86e91eb 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1458,7 +1458,7 @@ static void spapr_check_setup_free_hpt(SpaprMachineState *spapr,
         spapr_free_hpt(spapr);
     } else if (!(patbe_new & PATE1_GR)) {
         /* RADIX->HASH || NOTHING->HASH : Allocate HPT */
-        spapr_setup_hpt_and_vrma(spapr);
+        spapr_setup_hpt(spapr);
     }
     return;
 }
@@ -1845,7 +1845,7 @@ static target_ulong h_client_architecture_support(PowerPCCPU *cpu,
          * (because the guest isn't going to use radix) then set it up here. */
         if ((spapr->patb_entry & PATE1_GR) && !guest_radix) {
             /* legacy hash or new hash: */
-            spapr_setup_hpt_and_vrma(spapr);
+            spapr_setup_hpt(spapr);
         }
 
         if (fdt_bufsize < sizeof(hdr)) {
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index a4216935a1..90dbc55931 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -156,7 +156,6 @@ struct SpaprMachineState {
     SpaprPendingHpt *pending_hpt; /* in-progress resize */
 
     hwaddr rma_size;
-    int vrma_adjust;
     uint32_t fdt_size;
     uint32_t fdt_initial_size;
     void *fdt_blob;
@@ -795,7 +794,7 @@ void *spapr_build_fdt(SpaprMachineState *spapr, bool reset, size_t space);
 void spapr_events_init(SpaprMachineState *sm);
 void spapr_dt_events(SpaprMachineState *sm, void *fdt);
 void close_htab_fd(SpaprMachineState *spapr);
-void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr);
+void spapr_setup_hpt(SpaprMachineState *spapr);
 void spapr_free_hpt(SpaprMachineState *spapr);
 SpaprTceTable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn);
 void spapr_tce_table_enable(SpaprTceTable *tcet,
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (13 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 14/17] spapr: Don't attempt to clamp RMA to VRMA constraint David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03 10:02   ` Greg Kurz
  2020-03-05  8:45   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 16/17] spapr: Clean up RMA size calculation David Gibson
  2020-03-03  3:43 ` [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller David Gibson
  16 siblings, 2 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

In spapr_machine_init() we clamp the size of the RMA to 16GiB and the
comment saying why doesn't make a whole lot of sense.  In fact, this was
done because the real mode handling code elsewhere limited the RMA in TCG
mode to the maximum value configurable in LPCR[RMLS], 16GiB.

But,
 * Actually LPCR[RMLS] has been able to encode a 256GiB size for a very
   long time, we just didn't implement it properly in the softmmu
 * LPCR[RMLS] shouldn't really be relevant anyway, it only was because we
   used to abuse the RMOR based translation mode in order to handle the
   fact that we're not modelling the hypervisor parts of the cpu

We've now removed those limitations in the modelling so the 16GiB clamp no
longer serves a function.  However, we can't just remove the limit
universally: that would break migration to earlier qemu versions, where
the 16GiB RMLS limit still applies, no matter how bad the reasons for it
are.

So, we replace the 16GiB clamp with a clamp to a limit defined in the
machine type class.  We set it to 16GiB for machine types 4.2 and earlier,
and to 0, meaning unlimited, for the new 5.0 machine type.
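
In isolation, the clamp-if-set convention this introduces looks like the
following standalone sketch (same idea as the patch below, with 0 meaning
"no clamp"):

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define GiB (1024ULL * 1024 * 1024)

    /* rma_limit is 0 on the new 5.0 machine type, 16GiB on <= 4.2 */
    static uint64_t clamp_rma(uint64_t rma_size, uint64_t rma_limit)
    {
        return rma_limit ? MIN(rma_size, rma_limit) : rma_size;
    }

    int main(void)
    {
        printf("4.2 type: %llu GiB\n",    /* prints 16: clamped   */
               (unsigned long long)(clamp_rma(32 * GiB, 16 * GiB) / GiB));
        printf("5.0 type: %llu GiB\n",    /* prints 32: unclamped */
               (unsigned long long)(clamp_rma(32 * GiB, 0) / GiB));
        return 0;
    }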

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr.c         | 13 ++++++++-----
 include/hw/ppc/spapr.h |  1 +
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index ef7667455c..95bda4a615 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2701,12 +2701,14 @@ static void spapr_machine_init(MachineState *machine)
 
     spapr->rma_size = node0_size;
 
-    /* Actually we don't support unbounded RMA anymore since we added
-     * proper emulation of HV mode. The max we can get is 16G which
-     * also happens to be what we configure for PAPR mode so make sure
-     * we don't do anything bigger than that
+    /*
+     * Clamp the RMA size based on machine type.  This is for
+     * migration compatibility with older qemu versions, which limited
+     * the RMA size for complicated and mostly bad reasons.
      */
-    spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);
+    if (smc->rma_limit) {
+        spapr->rma_size = MIN(spapr->rma_size, smc->rma_limit);
+    }
 
     if (spapr->rma_size > node0_size) {
         error_report("Numa node 0 has to span the RMA (%#08"HWADDR_PRIx")",
@@ -4598,6 +4600,7 @@ static void spapr_machine_4_2_class_options(MachineClass *mc)
     compat_props_add(mc->compat_props, hw_compat_4_2, hw_compat_4_2_len);
     smc->default_caps.caps[SPAPR_CAP_CCF_ASSIST] = SPAPR_CAP_OFF;
     smc->default_caps.caps[SPAPR_CAP_FWNMI_MCE] = SPAPR_CAP_OFF;
+    smc->rma_limit = 16 * GiB;
     mc->nvdimm_supported = false;
 }
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 90dbc55931..2015e37ac5 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -126,6 +126,7 @@ struct SpaprMachineClass {
     bool pre_4_1_migration; /* don't migrate hpt-max-page-size */
     bool linux_pci_probe;
     bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
+    hwaddr rma_limit;          /* clamp the RMA to this size */
 
     void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
                           uint64_t *buid, hwaddr *pio, 
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 16/17] spapr: Clean up RMA size calculation
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (14 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03 10:18   ` Greg Kurz
  2020-03-05  8:43   ` Philippe Mathieu-Daudé
  2020-03-03  3:43 ` [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller David Gibson
  16 siblings, 2 replies; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

Move the calculation of the Real Mode Area (RMA) size into a helper
function.  While we're there, clean it up and correct it in a few ways:
  * Add comments making it clearer where the various constraints come from
  * Remove a pointless check that the RMA fits within Node 0 (we've just
    clamped it so that it does)
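
To make the resulting order of the constraints concrete, a hypothetical
walk-through (pure arithmetic, not code from the patch):

    ram_size = 32GiB, node 0 = 8GiB    ->  rma_size = min(32GiB, 8GiB) = 8GiB
    1TiB VRMA segment cap              ->  rma_size = min(8GiB, 1TiB)  = 8GiB
    smc->rma_limit = 16GiB (4.2 type)  ->  rma_size = min(8GiB, 16GiB) = 8GiB
    MIN_RMA_SLOF check                 ->  8GiB >= 128MiB, so no error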

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr.c | 60 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 24 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 95bda4a615..2eb0d8f70d 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2648,6 +2648,41 @@ static PCIHostState *spapr_create_default_phb(void)
     return PCI_HOST_BRIDGE(dev);
 }
 
+static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
+{
+    MachineState *machine = MACHINE(spapr);
+    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
+    hwaddr rma_size = machine->ram_size;
+    hwaddr node0_size = spapr_node0_size(machine);
+
+    /* RMA has to fit in the first NUMA node */
+    rma_size = MIN(rma_size, node0_size);
+
+    /*
+     * VRMA access is via a special 1TiB SLB mapping, so the RMA can
+     * never exceed that
+     */
+    rma_size = MIN(rma_size, 1 * TiB);
+
+    /*
+     * Clamp the RMA size based on machine type.  This is for
+     * migration compatibility with older qemu versions, which limited
+     * the RMA size for complicated and mostly bad reasons.
+     */
+    if (smc->rma_limit) {
+        rma_size = MIN(rma_size, smc->rma_limit);
+    }
+
+    if (rma_size < MIN_RMA_SLOF) {
+        error_setg(errp,
+"pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
+                   MIN_RMA_SLOF / MiB);
+        return 0;
+    }
+
+    return rma_size;
+}
+
 /* pSeries LPAR / sPAPR hardware init */
 static void spapr_machine_init(MachineState *machine)
 {
@@ -2659,7 +2694,6 @@ static void spapr_machine_init(MachineState *machine)
     PCIHostState *phb;
     int i;
     MemoryRegion *sysmem = get_system_memory();
-    hwaddr node0_size = spapr_node0_size(machine);
     long load_limit, fw_size;
     char *filename;
     Error *resize_hpt_err = NULL;
@@ -2699,22 +2733,7 @@ static void spapr_machine_init(MachineState *machine)
         exit(1);
     }
 
-    spapr->rma_size = node0_size;
-
-    /*
-     * Clamp the RMA size based on machine type.  This is for
-     * migration compatibility with older qemu versions, which limited
-     * the RMA size for complicated and mostly bad reasons.
-     */
-    if (smc->rma_limit) {
-        spapr->rma_size = MIN(spapr->rma_size, smc->rma_limit);
-    }
-
-    if (spapr->rma_size > node0_size) {
-        error_report("Numa node 0 has to span the RMA (%#08"HWADDR_PRIx")",
-                     spapr->rma_size);
-        exit(1);
-    }
+    spapr->rma_size = spapr_rma_size(spapr, &error_fatal);
 
     /* Setup a load limit for the ramdisk leaving room for SLOF and FDT */
     load_limit = MIN(spapr->rma_size, RTAS_MAX_ADDR) - FW_OVERHEAD;
@@ -2951,13 +2970,6 @@ static void spapr_machine_init(MachineState *machine)
         }
     }
 
-    if (spapr->rma_size < MIN_RMA_SLOF) {
-        error_report(
-            "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
-            MIN_RMA_SLOF / MiB);
-        exit(1);
-    }
-
     if (kernel_filename) {
         uint64_t lowaddr = 0;
 
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller
  2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
                   ` (15 preceding siblings ...)
  2020-03-03  3:43 ` [PATCH v7 16/17] spapr: Clean up RMA size calculation David Gibson
@ 2020-03-03  3:43 ` David Gibson
  2020-03-03 10:32   ` Greg Kurz
  16 siblings, 1 reply; 40+ messages in thread
From: David Gibson @ 2020-03-03  3:43 UTC (permalink / raw)
  To: qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov, David Gibson

The Real Mode Area (RMA) needs to fit within the NUMA node owning memory
at address 0.  That's usually node 0, but can be a later one if there are
some nodes which have no memory (only CPUs).

This is currently handled by the spapr_node0_size() helper.  It has only
one caller, so there's not a lot of point splitting it out.  It's also
extremely easy to misread the code as clamping to the size of the smallest
node rather than the first node with any memory.

So, fold it into the caller, and add some commentary to make it a bit
clearer exactly what it's doing.
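
As a concrete illustration of the node selection (a standalone sketch with
a hypothetical topology, mirroring the loop in the diff below):

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define GiB (1024ULL * 1024 * 1024)

    int main(void)
    {
        /* node 0 is CPU-only; node 1 is the first node with memory */
        uint64_t node_mem[] = { 0, 4 * GiB, 12 * GiB };
        uint64_t rma_size = 16 * GiB;    /* machine->ram_size */
        int i;

        for (i = 0; i < 3; i++) {
            if (node_mem[i] != 0) {
                rma_size = MIN(rma_size, node_mem[i]);
                break;
            }
        }
        printf("RMA clamped to %llu GiB\n",   /* prints 4, from node 1 */
               (unsigned long long)(rma_size / GiB));
        return 0;
    }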

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 2eb0d8f70d..d674a9f48f 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -296,20 +296,6 @@ static void spapr_populate_pa_features(SpaprMachineState *spapr,
     _FDT((fdt_setprop(fdt, offset, "ibm,pa-features", pa_features, pa_size)));
 }
 
-static hwaddr spapr_node0_size(MachineState *machine)
-{
-    if (machine->numa_state->num_nodes) {
-        int i;
-        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
-            if (machine->numa_state->nodes[i].node_mem) {
-                return MIN(pow2floor(machine->numa_state->nodes[i].node_mem),
-                           machine->ram_size);
-            }
-        }
-    }
-    return machine->ram_size;
-}
-
 static void add_str(GString *s, const gchar *s1)
 {
     g_string_append_len(s, s1, strlen(s1) + 1);
@@ -2653,10 +2639,24 @@ static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
     MachineState *machine = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     hwaddr rma_size = machine->ram_size;
-    hwaddr node0_size = spapr_node0_size(machine);
 
     /* RMA has to fit in the first NUMA node */
-    rma_size = MIN(rma_size, node0_size);
+    if (machine->numa_state->num_nodes) {
+        /*
+         * It's possible for there to be some zero-memory nodes first
+         * in the list.  We need the RMA to fit inside the memory of
+         * the first node which actually has some memory.
+         */
+        int i;
+
+        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
+            if (machine->numa_state->nodes[i].node_mem != 0) {
+                rma_size = MIN(rma_size,
+                               machine->numa_state->nodes[i].node_mem);
+                break;
+            }
+        }
+    }
 
     /*
      * VRMA access is via a special 1TiB SLB mapping, so the RMA can
@@ -2673,6 +2673,11 @@ static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
         rma_size = MIN(rma_size, smc->rma_limit);
     }
 
+    /*
+     * RMA size must be a power of 2
+     */
+    rma_size = pow2floor(rma_size);
+
     if (rma_size < MIN_RMA_SLOF) {
         error_setg(errp,
 "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
-- 
2.24.1



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS]
  2020-03-03  3:43 ` [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS] David Gibson
@ 2020-03-03  7:52   ` Greg Kurz
  2020-03-05  8:53   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Greg Kurz @ 2020-03-03  7:52 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:42 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> Currently we use a big switch statement in ppc_hash64_update_rmls() to work
> out what the right RMA limit is based on the LPCR[RMLS] field.  There's no
> formula for this - it's just an arbitrary mapping defined by the existing
> CPU implementations - but we can make it a bit more readable by using a
> lookup table rather than a switch.  In addition we can use the MiB/GiB
> symbols to make it a bit clearer.
> 
> While there we add a bit of clarity and rationale to the comment about
> what happens if the LPCR[RMLS] doesn't contain a valid value.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/mmu-hash64.c | 63 ++++++++++++++++++-----------------------
>  1 file changed, 27 insertions(+), 36 deletions(-)
> 
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 0ef330a614..934989e6d9 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -18,6 +18,7 @@
>   * License along with this library; if not, see <http://www.gnu.org/licenses/>.
>   */
>  #include "qemu/osdep.h"
> +#include "qemu/units.h"
>  #include "cpu.h"
>  #include "exec/exec-all.h"
>  #include "exec/helper-proto.h"
> @@ -757,6 +758,31 @@ static void ppc_hash64_set_c(PowerPCCPU *cpu, hwaddr ptex, uint64_t pte1)
>      stb_phys(CPU(cpu)->as, base + offset, (pte1 & 0xff) | 0x80);
>  }
>  
> +static target_ulong rmls_limit(PowerPCCPU *cpu)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    /*
> +     * This is the full 4 bits encoding of POWER8. Previous
> +     * CPUs only support a subset of these but the filtering
> +     * is done when writing LPCR.
> +     *
> +     * Unsupported values mean the OS has shot itself in the
> +     * foot. Return a 0-sized RMA in this case, which we expect
> +     * to trigger an immediate DSI or ISI
> +     */
> +    static const target_ulong rma_sizes[16] = {
> +        [1] = 16 * GiB,
> +        [2] = 1 * GiB,
> +        [3] = 64 * MiB,
> +        [4] = 256 * MiB,
> +        [7] = 128 * MiB,
> +        [8] = 32 * MiB,
> +    };
> +    target_ulong rmls = (env->spr[SPR_LPCR] & LPCR_RMLS) >> LPCR_RMLS_SHIFT;
> +
> +    return rma_sizes[rmls];
> +}
> +
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                                  int rwx, int mmu_idx)
>  {
> @@ -1006,41 +1032,6 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
>      cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
>  }
>  
> -static void ppc_hash64_update_rmls(PowerPCCPU *cpu)
> -{
> -    CPUPPCState *env = &cpu->env;
> -    uint64_t lpcr = env->spr[SPR_LPCR];
> -
> -    /*
> -     * This is the full 4 bits encoding of POWER8. Previous
> -     * CPUs only support a subset of these but the filtering
> -     * is done when writing LPCR
> -     */
> -    switch ((lpcr & LPCR_RMLS) >> LPCR_RMLS_SHIFT) {
> -    case 0x8: /* 32MB */
> -        env->rmls = 0x2000000ull;
> -        break;
> -    case 0x3: /* 64MB */
> -        env->rmls = 0x4000000ull;
> -        break;
> -    case 0x7: /* 128MB */
> -        env->rmls = 0x8000000ull;
> -        break;
> -    case 0x4: /* 256MB */
> -        env->rmls = 0x10000000ull;
> -        break;
> -    case 0x2: /* 1GB */
> -        env->rmls = 0x40000000ull;
> -        break;
> -    case 0x1: /* 16GB */
> -        env->rmls = 0x400000000ull;
> -        break;
> -    default:
> -        /* What to do here ??? */
> -        env->rmls = 0;
> -    }
> -}
> -
>  static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>  {
>      CPUPPCState *env = &cpu->env;
> @@ -1099,7 +1090,7 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>      CPUPPCState *env = &cpu->env;
>  
>      env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> -    ppc_hash64_update_rmls(cpu);
> +    env->rmls = rmls_limit(cpu);
>      ppc_hash64_update_vrma(cpu);
>  }
>  



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand
  2020-03-03  3:43 ` [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand David Gibson
@ 2020-03-03  8:57   ` Greg Kurz
  2020-03-05  8:48   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Greg Kurz @ 2020-03-03  8:57 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:44 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> When the LPCR is written, we update the env->rmls field with the RMA limit
> it implies.  Simplify things by just calculating the value directly from
> the LPCR value when we need it.
> 
> It's possible this is a little slower, but it's unlikely to be significant,
> since this is only for real mode accesses in a translation configuration
> that's not used very often, and the whole thing is behind the qemu TLB
> anyway.  Therefore, keeping the number of state variables down and not
> having to worry about making sure it's always in sync seems the better
> option.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/cpu.h        | 1 -
>  target/ppc/mmu-hash64.c | 9 ++++++---
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 8077fdb068..f9871b1233 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1046,7 +1046,6 @@ struct CPUPPCState {
>      uint64_t insns_flags2;
>  #if defined(TARGET_PPC64)
>      ppc_slb_t vrma_slb;
> -    target_ulong rmls;
>  #endif
>  
>      int error_code;
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index fcccaabb88..4fd7b7ee74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -837,8 +837,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>  
>              goto skip_slb_search;
>          } else {
> +            target_ulong limit = rmls_limit(cpu);
> +
>              /* Emulated old-style RMO mode, bounds check against RMLS */
> -            if (raddr >= env->rmls) {
> +            if (raddr >= limit) {
>                  if (rwx == 2) {
>                      ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
>                  } else {
> @@ -1000,8 +1002,10 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>                  return -1;
>              }
>          } else {
> +            target_ulong limit = rmls_limit(cpu);
> +
>              /* Emulated old-style RMO mode, bounds check against RMLS */
> -            if (raddr >= env->rmls) {
> +            if (raddr >= limit) {
>                  return -1;
>              }
>              return raddr | env->spr[SPR_RMOR];
> @@ -1091,7 +1095,6 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>      CPUPPCState *env = &cpu->env;
>  
>      env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> -    env->rmls = rmls_limit(cpu);
>      ppc_hash64_update_vrma(cpu);
>  }
>  



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently
  2020-03-03  3:43 ` [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently David Gibson
@ 2020-03-03  9:37   ` Greg Kurz
  2020-03-05  9:17   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Greg Kurz @ 2020-03-03  9:37 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:45 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> Currently, we construct the SLBE used for VRMA translations when the LPCR
> is written (which controls some bits in the SLBE), then use it later for
> translations.
> 
> This is a bit complex and confusing - simplify it by simply constructing
> the SLBE directly from the LPCR when we need it.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/cpu.h        |  3 --
>  target/ppc/mmu-hash64.c | 92 ++++++++++++++++-------------------------
>  2 files changed, 35 insertions(+), 60 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index f9871b1233..5a55fb02bd 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1044,9 +1044,6 @@ struct CPUPPCState {
>      uint32_t flags;
>      uint64_t insns_flags;
>      uint64_t insns_flags2;
> -#if defined(TARGET_PPC64)
> -    ppc_slb_t vrma_slb;
> -#endif
>  
>      int error_code;
>      uint32_t pending_interrupts;
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 4fd7b7ee74..34f6009b1e 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -784,11 +784,41 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
>      return rma_sizes[rmls];
>  }
>  
> +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    target_ulong lpcr = env->spr[SPR_LPCR];
> +    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> +    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
> +    int i;
> +
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
> +
> +        if (!sps->page_shift) {
> +            break;
> +        }
> +
> +        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
> +            slb->esid = SLB_ESID_V;
> +            slb->vsid = vsid;
> +            slb->sps = sps;
> +            return 0;
> +        }
> +    }
> +
> +    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
> +                 TARGET_FMT_lx, lpcr);
> +
> +    return -1;
> +}
> +
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                                  int rwx, int mmu_idx)
>  {
>      CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> +    ppc_slb_t vrma_slbe;
>      ppc_slb_t *slb;
>      unsigned apshift;
>      hwaddr ptex;
> @@ -827,8 +857,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>              }
>          } else if (ppc_hash64_use_vrma(env)) {
>              /* Emulated VRMA mode */
> -            slb = &env->vrma_slb;
> -            if (!slb->sps) {
> +            slb = &vrma_slbe;
> +            if (build_vrma_slbe(cpu, slb) != 0) {
>                  /* Invalid VRMA setup, machine check */
>                  cs->exception_index = POWERPC_EXCP_MCHECK;
>                  env->error_code = 0;
> @@ -976,6 +1006,7 @@ skip_slb_search:
>  hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>  {
>      CPUPPCState *env = &cpu->env;
> +    ppc_slb_t vrma_slbe;
>      ppc_slb_t *slb;
>      hwaddr ptex, raddr;
>      ppc_hash_pte64_t pte;
> @@ -997,8 +1028,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>              return raddr | env->spr[SPR_HRMOR];
>          } else if (ppc_hash64_use_vrma(env)) {
>              /* Emulated VRMA mode */
> -            slb = &env->vrma_slb;
> -            if (!slb->sps) {
> +            slb = &vrma_slbe;
> +            if (build_vrma_slbe(cpu, slb) != 0) {
>                  return -1;
>              }
>          } else {
> @@ -1037,65 +1068,12 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
>      cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
>  }
>  
> -static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> -{
> -    CPUPPCState *env = &cpu->env;
> -    const PPCHash64SegmentPageSizes *sps = NULL;
> -    target_ulong esid, vsid, lpcr;
> -    ppc_slb_t *slb = &env->vrma_slb;
> -    uint32_t vrmasd;
> -    int i;
> -
> -    /* First clear it */
> -    slb->esid = slb->vsid = 0;
> -    slb->sps = NULL;
> -
> -    /* Is VRMA enabled ? */
> -    if (!ppc_hash64_use_vrma(env)) {
> -        return;
> -    }
> -
> -    /*
> -     * Make one up. Mostly ignore the ESID which will not be needed
> -     * for translation
> -     */
> -    lpcr = env->spr[SPR_LPCR];
> -    vsid = SLB_VSID_VRMA;
> -    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> -    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> -    esid = SLB_ESID_V;
> -
> -    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> -        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
> -
> -        if (!sps1->page_shift) {
> -            break;
> -        }
> -
> -        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> -            sps = sps1;
> -            break;
> -        }
> -    }
> -
> -    if (!sps) {
> -        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
> -                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
> -        return;
> -    }
> -
> -    slb->vsid = vsid;
> -    slb->esid = esid;
> -    slb->sps = sps;
> -}
> -
>  void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>  {
>      PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
>      CPUPPCState *env = &cpu->env;
>  
>      env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> -    ppc_hash64_update_vrma(cpu);
>  }
>  
>  void helper_store_lpcr(CPUPPCState *env, target_ulong val)



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 14/17] spapr: Don't attempt to clamp RMA to VRMA constraint
  2020-03-03  3:43 ` [PATCH v7 14/17] spapr: Don't attempt to clamp RMA to VRMA constraint David Gibson
@ 2020-03-03  9:55   ` Greg Kurz
  0 siblings, 0 replies; 40+ messages in thread
From: Greg Kurz @ 2020-03-03  9:55 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:48 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> The Real Mode Area (RMA) is the part of memory which a guest can access
> when in real (MMU off) mode.  Of course, for a guest under KVM, the MMU
> isn't really turned off, it's just in a special translation mode - Virtual
> Real Mode Area (VRMA) - which looks like real mode in guest mode.
> 
> The mechanics of how this works when using the hash MMU (HPT) put a
> constraint on the size of the RMA, which depends on the size of the
> HPT.  So, the latter part of spapr_setup_hpt_and_vrma() clamps the RMA
> we advertise to the guest based on this VRMA limit.
> 
> There are several things wrong with this:
>  1) spapr_setup_hpt_and_vrma() doesn't actually clamp, it takes the minimum
>     of Node 0 memory size and the VRMA limit.  That will *often* work the
>     same as clamping, but there can be other constraints on RMA size which
>     supersede Node 0 memory size.  We have real bugs caused by this
>     (currently worked around in the guest kernel)
>  2) Some callers of spapr_setup_hpt_and_vrma() are in a situation where
>     we're past the point that we can actually advertise an RMA limit to the
>     guest
>  3) But most fundamentally, the VRMA limit depends on host configuration
>     (page size) which shouldn't be visible to the guest, but this partially
>     exposes it.  This can cause problems with migration in certain edge
>     cases, although we will mostly get away with it.
> 
> In practice, this clamping is almost never applied anyway.  With 64kiB
> pages and the normal rules for sizing of the HPT, the theoretical VRMA
> limit will be 4x(guest memory size) and so never hit.  It will hit with
> 4kiB pages, where it will be (guest memory size)/4.  However all mainstream
> distro kernels for POWER have used a 64kiB page size for at least 10 years.
> 
> So, simply replace this logic with a check that the RMA we've calculated
> based only on guest visible configuration will fit within the host implied
> VRMA limit.  This can break if running HPT guests on a host kernel with
> 4kiB page size.  As noted that's very rare.  There also exist several
> possible workarounds:
>   * Change the host kernel to use 64kiB pages
>   * Use radix MMU (RPT) guests instead of HPT
>   * Use 64kiB hugepages on the host to back guest memory
>   * Increase the guest memory size so that the RMA hits one of the fixed
>     limits before the VRMA limit.  This is relatively easy on POWER8 which
>     has a 16GiB limit, harder on POWER9 which has a 1TiB limit.
>   * Use a guest NUMA configuration which artificially constrains the RMA
>     within the VRMA limit (the RMA must always fit within Node 0).
> 
> Previously, on KVM, we also temporarily reduced the rma_size to 256M so
> that we'd load the kernel and initrd safely, regardless of the VRMA
> limit.  This was a) confusing, b) could significantly limit the size of
> images we could load and c) introduced a behavioural difference between
> KVM and TCG.  So we remove that as well.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c         | 28 ++++++++++------------------
>  hw/ppc/spapr_hcall.c   |  4 ++--
>  include/hw/ppc/spapr.h |  3 +--
>  3 files changed, 13 insertions(+), 22 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 18bf4bc3de..ef7667455c 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1569,7 +1569,7 @@ void spapr_reallocate_hpt(SpaprMachineState *spapr, int shift,
>      spapr_set_all_lpcrs(0, LPCR_HR | LPCR_UPRT);
>  }
>  
> -void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr)
> +void spapr_setup_hpt(SpaprMachineState *spapr)
>  {
>      int hpt_shift;
>  
> @@ -1585,10 +1585,16 @@ void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr)
>      }
>      spapr_reallocate_hpt(spapr, hpt_shift, &error_fatal);
>  
> -    if (spapr->vrma_adjust) {
> +    if (kvm_enabled()) {
>          hwaddr vrma_limit = kvmppc_vrma_limit(spapr->htab_shift);
>  
> -        spapr->rma_size = MIN(spapr_node0_size(MACHINE(spapr)), vrma_limit);
> +        /* Check our RMA fits in the possible VRMA */
> +        if (vrma_limit < spapr->rma_size) {
> +            error_report("Unable to create %" HWADDR_PRIu
> +                         "MiB RMA (VRMA only allows %" HWADDR_PRIu "MiB",
> +                         spapr->rma_size / MiB, vrma_limit / MiB);
> +            exit(EXIT_FAILURE);
> +        }
>      }
>  }
>  
> @@ -1628,7 +1634,7 @@ static void spapr_machine_reset(MachineState *machine)
>          spapr->patb_entry = PATE1_GR;
>          spapr_set_all_lpcrs(LPCR_HR | LPCR_UPRT, LPCR_HR | LPCR_UPRT);
>      } else {
> -        spapr_setup_hpt_and_vrma(spapr);
> +        spapr_setup_hpt(spapr);
>      }
>  
>      qemu_devices_reset();
> @@ -2695,20 +2701,6 @@ static void spapr_machine_init(MachineState *machine)
>  
>      spapr->rma_size = node0_size;
>  
> -    /* With KVM, we don't actually know whether KVM supports an
> -     * unbounded RMA (PR KVM) or is limited by the hash table size
> -     * (HV KVM using VRMA), so we always assume the latter
> -     *
> -     * In that case, we also limit the initial allocations for RTAS
> -     * etc... to 256M since we have no way to know what the VRMA size
> -     * is going to be as it depends on the size of the hash table
> -     * which isn't determined yet.
> -     */
> -    if (kvm_enabled()) {
> -        spapr->vrma_adjust = 1;
> -        spapr->rma_size = MIN(spapr->rma_size, 0x10000000);
> -    }
> -
>      /* Actually we don't support unbounded RMA anymore since we added
>       * proper emulation of HV mode. The max we can get is 16G which
>       * also happens to be what we configure for PAPR mode so make sure
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index c2b3286625..40c86e91eb 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1458,7 +1458,7 @@ static void spapr_check_setup_free_hpt(SpaprMachineState *spapr,
>          spapr_free_hpt(spapr);
>      } else if (!(patbe_new & PATE1_GR)) {
>          /* RADIX->HASH || NOTHING->HASH : Allocate HPT */
> -        spapr_setup_hpt_and_vrma(spapr);
> +        spapr_setup_hpt(spapr);
>      }
>      return;
>  }
> @@ -1845,7 +1845,7 @@ static target_ulong h_client_architecture_support(PowerPCCPU *cpu,
>           * (because the guest isn't going to use radix) then set it up here. */
>          if ((spapr->patb_entry & PATE1_GR) && !guest_radix) {
>              /* legacy hash or new hash: */
> -            spapr_setup_hpt_and_vrma(spapr);
> +            spapr_setup_hpt(spapr);
>          }
>  
>          if (fdt_bufsize < sizeof(hdr)) {
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index a4216935a1..90dbc55931 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -156,7 +156,6 @@ struct SpaprMachineState {
>      SpaprPendingHpt *pending_hpt; /* in-progress resize */
>  
>      hwaddr rma_size;
> -    int vrma_adjust;
>      uint32_t fdt_size;
>      uint32_t fdt_initial_size;
>      void *fdt_blob;
> @@ -795,7 +794,7 @@ void *spapr_build_fdt(SpaprMachineState *spapr, bool reset, size_t space);
>  void spapr_events_init(SpaprMachineState *sm);
>  void spapr_dt_events(SpaprMachineState *sm, void *fdt);
>  void close_htab_fd(SpaprMachineState *spapr);
> -void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr);
> +void spapr_setup_hpt(SpaprMachineState *spapr);
>  void spapr_free_hpt(SpaprMachineState *spapr);
>  SpaprTceTable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn);
>  void spapr_tce_table_enable(SpaprTceTable *tcet,



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types
  2020-03-03  3:43 ` [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types David Gibson
@ 2020-03-03 10:02   ` Greg Kurz
  2020-03-05  8:45   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Greg Kurz @ 2020-03-03 10:02 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:49 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> In spapr_machine_init() we clamp the size of the RMA to 16GiB and the
> comment saying why doesn't make a whole lot of sense.  In fact, this was
> done because the real mode handling code elsewhere limited the RMA in TCG
> mode to the maximum value configurable in LPCR[RMLS], 16GiB.
> 
> But,
>  * Actually LPCR[RMLS] has been able to encode a 256GiB size for a very
>    long time, we just didn't implement it properly in the softmmu
>  * LPCR[RMLS] shouldn't really be relevant anyway, it only was because we
>    used to abuse the RMOR based translation mode in order to handle the
>    fact that we're not modelling the hypervisor parts of the cpu
> 
> We've now removed those limitations in the modelling so the 16GiB clamp no
> longer serves a function.  However, we can't just remove the limit
> universally: that would break migration to earlier qemu versions, where
> the 16GiB RMLS limit still applies, no matter how bad the reasons for it
> are.
> 
> So, we replace the 16GiB clamp with a clamp to a limit defined in the
> machine type class.  We set it to 16GiB for machine types 4.2 and earlier,
> and to 0, meaning unlimited, for the new 5.0 machine type.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c         | 13 ++++++++-----
>  include/hw/ppc/spapr.h |  1 +
>  2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index ef7667455c..95bda4a615 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2701,12 +2701,14 @@ static void spapr_machine_init(MachineState *machine)
>  
>      spapr->rma_size = node0_size;
>  
> -    /* Actually we don't support unbounded RMA anymore since we added
> -     * proper emulation of HV mode. The max we can get is 16G which
> -     * also happens to be what we configure for PAPR mode so make sure
> -     * we don't do anything bigger than that
> +    /*
> +     * Clamp the RMA size based on machine type.  This is for
> +     * migration compatibility with older qemu versions, which limited
> +     * the RMA size for complicated and mostly bad reasons.
>       */
> -    spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);
> +    if (smc->rma_limit) {
> +        spapr->rma_size = MIN(spapr->rma_size, smc->rma_limit);
> +    }
>  
>      if (spapr->rma_size > node0_size) {
>          error_report("Numa node 0 has to span the RMA (%#08"HWADDR_PRIx")",
> @@ -4598,6 +4600,7 @@ static void spapr_machine_4_2_class_options(MachineClass *mc)
>      compat_props_add(mc->compat_props, hw_compat_4_2, hw_compat_4_2_len);
>      smc->default_caps.caps[SPAPR_CAP_CCF_ASSIST] = SPAPR_CAP_OFF;
>      smc->default_caps.caps[SPAPR_CAP_FWNMI_MCE] = SPAPR_CAP_OFF;
> +    smc->rma_limit = 16 * GiB;
>      mc->nvdimm_supported = false;
>  }
>  
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 90dbc55931..2015e37ac5 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -126,6 +126,7 @@ struct SpaprMachineClass {
>      bool pre_4_1_migration; /* don't migrate hpt-max-page-size */
>      bool linux_pci_probe;
>      bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
> +    hwaddr rma_limit;          /* clamp the RMA to this size */
>  
>      void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
>                            uint64_t *buid, hwaddr *pio, 



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 16/17] spapr: Clean up RMA size calculation
  2020-03-03  3:43 ` [PATCH v7 16/17] spapr: Clean up RMA size calculation David Gibson
@ 2020-03-03 10:18   ` Greg Kurz
  2020-03-05  8:43   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Greg Kurz @ 2020-03-03 10:18 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:50 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> Move the calculation of the Real Mode Area (RMA) size into a helper
> function.  While we're there, clean it up and correct it in a few ways:
>   * Add comments making it clearer where the various constraints come from
>   * Remove a pointless check that the RMA fits within Node 0 (we've just
>     clamped it so that it does)
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c | 60 ++++++++++++++++++++++++++++++--------------------
>  1 file changed, 36 insertions(+), 24 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 95bda4a615..2eb0d8f70d 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2648,6 +2648,41 @@ static PCIHostState *spapr_create_default_phb(void)
>      return PCI_HOST_BRIDGE(dev);
>  }
>  
> +static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
> +{
> +    MachineState *machine = MACHINE(spapr);
> +    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> +    hwaddr rma_size = machine->ram_size;
> +    hwaddr node0_size = spapr_node0_size(machine);
> +
> +    /* RMA has to fit in the first NUMA node */
> +    rma_size = MIN(rma_size, node0_size);
> +
> +    /*
> +     * VRMA access is via a special 1TiB SLB mapping, so the RMA can
> +     * never exceed that
> +     */
> +    rma_size = MIN(rma_size, 1 * TiB);
> +
> +    /*
> +     * Clamp the RMA size based on machine type.  This is for
> +     * migration compatibility with older qemu versions, which limited
> +     * the RMA size for complicated and mostly bad reasons.
> +     */
> +    if (smc->rma_limit) {
> +        rma_size = MIN(rma_size, smc->rma_limit);
> +    }
> +
> +    if (rma_size < MIN_RMA_SLOF) {
> +        error_setg(errp,
> +"pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
> +                   MIN_RMA_SLOF / MiB);
> +        return 0;
> +    }
> +
> +    return rma_size;
> +}
> +
>  /* pSeries LPAR / sPAPR hardware init */
>  static void spapr_machine_init(MachineState *machine)
>  {
> @@ -2659,7 +2694,6 @@ static void spapr_machine_init(MachineState *machine)
>      PCIHostState *phb;
>      int i;
>      MemoryRegion *sysmem = get_system_memory();
> -    hwaddr node0_size = spapr_node0_size(machine);
>      long load_limit, fw_size;
>      char *filename;
>      Error *resize_hpt_err = NULL;
> @@ -2699,22 +2733,7 @@ static void spapr_machine_init(MachineState *machine)
>          exit(1);
>      }
>  
> -    spapr->rma_size = node0_size;
> -
> -    /*
> -     * Clamp the RMA size based on machine type.  This is for
> -     * migration compatibility with older qemu versions, which limited
> -     * the RMA size for complicated and mostly bad reasons.
> -     */
> -    if (smc->rma_limit) {
> -        spapr->rma_size = MIN(spapr->rma_size, smc->rma_limit);
> -    }
> -
> -    if (spapr->rma_size > node0_size) {
> -        error_report("Numa node 0 has to span the RMA (%#08"HWADDR_PRIx")",
> -                     spapr->rma_size);
> -        exit(1);
> -    }
> +    spapr->rma_size = spapr_rma_size(spapr, &error_fatal);
>  
>      /* Setup a load limit for the ramdisk leaving room for SLOF and FDT */
>      load_limit = MIN(spapr->rma_size, RTAS_MAX_ADDR) - FW_OVERHEAD;
> @@ -2951,13 +2970,6 @@ static void spapr_machine_init(MachineState *machine)
>          }
>      }
>  
> -    if (spapr->rma_size < MIN_RMA_SLOF) {
> -        error_report(
> -            "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
> -            MIN_RMA_SLOF / MiB);
> -        exit(1);
> -    }
> -
>      if (kernel_filename) {
>          uint64_t lowaddr = 0;
>  



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller
  2020-03-03  3:43 ` [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller David Gibson
@ 2020-03-03 10:32   ` Greg Kurz
  2020-03-04  1:25     ` David Gibson
  0 siblings, 1 reply; 40+ messages in thread
From: Greg Kurz @ 2020-03-03 10:32 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue,  3 Mar 2020 14:43:51 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> The Real Mode Area (RMA) needs to fit within the NUMA node owning memory
> at address 0.  That's usually node 0, but can be a later one if there are
> some nodes which have no memory (only CPUs).
> 
> This is currently handled by the spapr_node0_size() helper.  It has only
> one caller, so there's not a lot of point splitting it out.  It's also
> extremely easy to misread the code as clamping to the size of the smallest
> node rather than the first node with any memory.
> 
> So, fold it into the caller, and add some commentary to make it a bit
> clearer exactly what it's doing.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/ppc/spapr.c | 37 +++++++++++++++++++++----------------
>  1 file changed, 21 insertions(+), 16 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 2eb0d8f70d..d674a9f48f 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -296,20 +296,6 @@ static void spapr_populate_pa_features(SpaprMachineState *spapr,
>      _FDT((fdt_setprop(fdt, offset, "ibm,pa-features", pa_features, pa_size)));
>  }
>  
> -static hwaddr spapr_node0_size(MachineState *machine)
> -{
> -    if (machine->numa_state->num_nodes) {
> -        int i;
> -        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
> -            if (machine->numa_state->nodes[i].node_mem) {
> -                return MIN(pow2floor(machine->numa_state->nodes[i].node_mem),
> -                           machine->ram_size);
> -            }
> -        }
> -    }
> -    return machine->ram_size;
> -}
> -
>  static void add_str(GString *s, const gchar *s1)
>  {
>      g_string_append_len(s, s1, strlen(s1) + 1);
> @@ -2653,10 +2639,24 @@ static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
>      MachineState *machine = MACHINE(spapr);
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      hwaddr rma_size = machine->ram_size;
> -    hwaddr node0_size = spapr_node0_size(machine);
>  
>      /* RMA has to fit in the first NUMA node */
> -    rma_size = MIN(rma_size, node0_size);
> +    if (machine->numa_state->num_nodes) {
> +        /*
> +         * It's possible for there to be some zero-memory nodes first
> +         * in the list.  We need the RMA to fit inside the memory of
> +         * the first node which actually has some memory.
> +         */
> +        int i;
> +
> +        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
> +            if (machine->numa_state->nodes[i].node_mem != 0) {
> +                rma_size = MIN(rma_size,
> +                               machine->numa_state->nodes[i].node_mem);
> +                break;
> +            }
> +        }
> +    }
>  
>      /*
>       * VRMA access is via a special 1TiB SLB mapping, so the RMA can
> @@ -2673,6 +2673,11 @@ static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
>          rma_size = MIN(rma_size, smc->rma_limit);
>      }
>  
> +    /*
> +     * RMA size must be a power of 2
> +     */
> +    rma_size = pow2floor(rma_size);
> +

I saw somewhere else that the reason behind this might be
related to:

https://git.qemu.org/?p=qemu.git;a=commitdiff;h=6010818c30ce9c

commit 6010818c30ce9c796b4e22fd261fc6fea1cecbfc
Author: Alexey Kardashevskiy <aik@ozlabs.ru>
Date:   Thu Jul 3 13:10:05 2014 +1000

    spapr: Split memory nodes to power-of-two blocks

Is this the reason ?

In any case, it would probably help to mention somewhere
why the rounding is introduced by this patch.
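
For reference, the rounding itself is trivial: pow2floor() just rounds
down to a power of two.  A minimal standalone sketch, where the local
helper stands in for QEMU's pow2floor() from include/qemu/host-utils.h:

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for QEMU's pow2floor(): largest power of two <= value */
static uint64_t pow2floor(uint64_t value)
{
    while (value & (value - 1)) {
        value &= value - 1;             /* clear the lowest set bit */
    }
    return value;
}

int main(void)
{
    /* e.g. a 3 GiB node 0 leaves a 2 GiB RMA after the rounding */
    uint64_t examples[] = { 3ULL << 30, 1ULL << 30, (1ULL << 30) + 1 };
    int i;

    for (i = 0; i < 3; i++) {
        printf("%#llx -> %#llx\n",
               (unsigned long long)examples[i],
               (unsigned long long)pow2floor(examples[i]));
    }
    return 0;
}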

>      if (rma_size < MIN_RMA_SLOF) {
>          error_setg(errp,
>  "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller
  2020-03-03 10:32   ` Greg Kurz
@ 2020-03-04  1:25     ` David Gibson
  0 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-04  1:25 UTC (permalink / raw)
  To: Greg Kurz
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Paolo Bonzini,
	qemu-ppc, clg, Igor Mammedov, Edgar E. Iglesias, paulus

On Tue, Mar 03, 2020 at 11:32:49AM +0100, Greg Kurz wrote:
> On Tue,  3 Mar 2020 14:43:51 +1100
> David Gibson <david@gibson.dropbear.id.au> wrote:
> 
> > The Real Mode Area (RMA) needs to fit within the NUMA node owning memory
> > at address 0.  That's usually node 0, but can be a later one if there are
> > some nodes which have no memory (only CPUs).
> > 
> > This is currently handled by the spapr_node0_size() helper.  It has only
> > one caller, so there's not a lot of point splitting it out.  It's also
> > extremely easy to misread the code as clamping to the size of the smallest
> > node rather than the first node with any memory.
> > 
> > So, fold it into the caller, and add some commentary to make it a bit
> > clearer exactly what it's doing.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  hw/ppc/spapr.c | 37 +++++++++++++++++++++----------------
> >  1 file changed, 21 insertions(+), 16 deletions(-)
> > 
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 2eb0d8f70d..d674a9f48f 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -296,20 +296,6 @@ static void spapr_populate_pa_features(SpaprMachineState *spapr,
> >      _FDT((fdt_setprop(fdt, offset, "ibm,pa-features", pa_features, pa_size)));
> >  }
> >  
> > -static hwaddr spapr_node0_size(MachineState *machine)
> > -{
> > -    if (machine->numa_state->num_nodes) {
> > -        int i;
> > -        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
> > -            if (machine->numa_state->nodes[i].node_mem) {
> > -                return MIN(pow2floor(machine->numa_state->nodes[i].node_mem),
> > -                           machine->ram_size);
> > -            }
> > -        }
> > -    }
> > -    return machine->ram_size;
> > -}
> > -
> >  static void add_str(GString *s, const gchar *s1)
> >  {
> >      g_string_append_len(s, s1, strlen(s1) + 1);
> > @@ -2653,10 +2639,24 @@ static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
> >      MachineState *machine = MACHINE(spapr);
> >      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> >      hwaddr rma_size = machine->ram_size;
> > -    hwaddr node0_size = spapr_node0_size(machine);
> >  
> >      /* RMA has to fit in the first NUMA node */
> > -    rma_size = MIN(rma_size, node0_size);
> > +    if (machine->numa_state->num_nodes) {
> > +        /*
> > +         * It's possible for there to be some zero-memory nodes first
> > +         * in the list.  We need the RMA to fit inside the memory of
> > +         * the first node which actually has some memory.
> > +         */
> > +        int i;
> > +
> > +        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
> > +            if (machine->numa_state->nodes[i].node_mem != 0) {
> > +                rma_size = MIN(rma_size,
> > +                               machine->numa_state->nodes[i].node_mem);
> > +                break;
> > +            }
> > +        }
> > +    }
> >  
> >      /*
> >       * VRMA access is via a special 1TiB SLB mapping, so the RMA can
> > @@ -2673,6 +2673,11 @@ static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
> >          rma_size = MIN(rma_size, smc->rma_limit);
> >      }
> >  
> > +    /*
> > +     * RMA size must be a power of 2
> > +     */
> > +    rma_size = pow2floor(rma_size);
> > +
> 
> I saw somewhere else that the reason behind this might be
> related to:
> 
> https://git.qemu.org/?p=qemu.git;a=commitdiff;h=6010818c30ce9c
> 
> commit 6010818c30ce9c796b4e22fd261fc6fea1cecbfc
> Author: Alexey Kardashevskiy <aik@ozlabs.ru>
> Date:   Thu Jul 3 13:10:05 2014 +1000
> 
>     spapr: Split memory nodes to power-of-two blocks
> 
> Is this the reason ?

Quite likely.

> In any case, it would probably help to mention somewhere
> why the rounding is introduced by this patch.

Drat.  I meant to sort out your comment on the last spin better than
this, but got part way through and forgot what I was doing.

I'm going to merge everything except this last patch into ppc-for-5.0
now, and try to sort out this one a bit later.

> 
> >      if (rma_size < MIN_RMA_SLOF) {
> >          error_setg(errp,
> >  "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 12/17] spapr: Don't use weird units for MIN_RMA_SLOF
  2020-03-03  3:43 ` [PATCH v7 12/17] spapr: Don't use weird units for MIN_RMA_SLOF David Gibson
@ 2020-03-05  8:39   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:39 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

(sorry I missed that due to the change of series cover subject, I was
tracking "Fixes for RMA size calculation")

On 3/3/20 4:43 AM, David Gibson wrote:
> MIN_RMA_SLOF records the minimum amount of RMA that the SLOF firmware
> requires.  It lets us give a meaningful error if the RMA ends up too small,
> rather than just letting SLOF crash.
> 
> It's currently stored as a number of megabytes, which is unusual for a global
> constant.  Move the megabyte scaling into the definition of the constant
> itself, as most other size constants do.
> 
> Change from M to MiB in the associated message while we're at it.
> 
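The MiB symbol comes from include/qemu/units.h.  A self-contained sketch
of the idiom (the definitions below are written from memory, so treat
them as a sketch rather than a verbatim copy; MIN_RMA_SLOF mirrors the
patch):

#include <inttypes.h>
#include <stdio.h>

/* Binary size units in the style of include/qemu/units.h */
#define KiB (INT64_C(1) << 10)
#define MiB (INT64_C(1) << 20)
#define GiB (INT64_C(1) << 30)

/* The scaling lives in the definition, not at every use site */
#define MIN_RMA_SLOF (128 * MiB)

int main(void)
{
    printf("MIN_RMA_SLOF = %" PRId64 " bytes (%" PRId64 " MiB)\n",
           MIN_RMA_SLOF, MIN_RMA_SLOF / MiB);
    return 0;
}
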
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> ---
>   hw/ppc/spapr.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index cc10798be4..510494ad87 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -103,7 +103,7 @@
>   #define FW_OVERHEAD             0x2800000
>   #define KERNEL_LOAD_ADDR        FW_MAX_SIZE
>   
> -#define MIN_RMA_SLOF            128UL
> +#define MIN_RMA_SLOF            (128 * MiB)
>   
>   #define PHANDLE_INTC            0x00001111
>   
> @@ -2956,10 +2956,10 @@ static void spapr_machine_init(MachineState *machine)
>           }
>       }
>   
> -    if (spapr->rma_size < (MIN_RMA_SLOF * MiB)) {
> +    if (spapr->rma_size < MIN_RMA_SLOF) {
>           error_report(
> -            "pSeries SLOF firmware requires >= %ldM guest RMA (Real Mode Area memory)",
> -            MIN_RMA_SLOF);
> +            "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
> +            MIN_RMA_SLOF / MiB);
>           exit(1);
>       }
>   
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 16/17] spapr: Clean up RMA size calculation
  2020-03-03  3:43 ` [PATCH v7 16/17] spapr: Clean up RMA size calculation David Gibson
  2020-03-03 10:18   ` Greg Kurz
@ 2020-03-05  8:43   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:43 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> Move the calculation of the Real Mode Area (RMA) size into a helper
> function.  While we're there, clean it up and correct it in a few ways:
>    * Add comments making it clearer where the various constraints come from
>    * Remove a pointless check that the RMA fits within Node 0 (we've just
>      clamped it so that it does)
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   hw/ppc/spapr.c | 60 ++++++++++++++++++++++++++++++--------------------
>   1 file changed, 36 insertions(+), 24 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 95bda4a615..2eb0d8f70d 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2648,6 +2648,41 @@ static PCIHostState *spapr_create_default_phb(void)
>       return PCI_HOST_BRIDGE(dev);
>   }
>   
> +static hwaddr spapr_rma_size(SpaprMachineState *spapr, Error **errp)
> +{
> +    MachineState *machine = MACHINE(spapr);
> +    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> +    hwaddr rma_size = machine->ram_size;
> +    hwaddr node0_size = spapr_node0_size(machine);
> +
> +    /* RMA has to fit in the first NUMA node */
> +    rma_size = MIN(rma_size, node0_size);
> +
> +    /*
> +     * VRMA access is via a special 1TiB SLB mapping, so the RMA can
> +     * never exceed that
> +     */
> +    rma_size = MIN(rma_size, 1 * TiB);
> +
> +    /*
> +     * Clamp the RMA size based on machine type.  This is for
> +     * migration compatibility with older qemu versions, which limited
> +     * the RMA size for complicated and mostly bad reasons.
> +     */
> +    if (smc->rma_limit) {
> +        rma_size = MIN(rma_size, smc->rma_limit);
> +    }
> +
> +    if (rma_size < MIN_RMA_SLOF) {
> +        error_setg(errp,
> +"pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
> +                   MIN_RMA_SLOF / MiB);
> +        return 0;
> +    }
> +
> +    return rma_size;
> +}
> +
>   /* pSeries LPAR / sPAPR hardware init */
>   static void spapr_machine_init(MachineState *machine)
>   {
> @@ -2659,7 +2694,6 @@ static void spapr_machine_init(MachineState *machine)
>       PCIHostState *phb;
>       int i;
>       MemoryRegion *sysmem = get_system_memory();
> -    hwaddr node0_size = spapr_node0_size(machine);
>       long load_limit, fw_size;
>       char *filename;
>       Error *resize_hpt_err = NULL;
> @@ -2699,22 +2733,7 @@ static void spapr_machine_init(MachineState *machine)
>           exit(1);
>       }
>   
> -    spapr->rma_size = node0_size;
> -
> -    /*
> -     * Clamp the RMA size based on machine type.  This is for
> -     * migration compatibility with older qemu versions, which limited
> -     * the RMA size for complicated and mostly bad reasons.
> -     */
> -    if (smc->rma_limit) {
> -        spapr->rma_size = MIN(spapr->rma_size, smc->rma_limit);
> -    }
> -
> -    if (spapr->rma_size > node0_size) {
> -        error_report("Numa node 0 has to span the RMA (%#08"HWADDR_PRIx")",
> -                     spapr->rma_size);
> -        exit(1);
> -    }
> +    spapr->rma_size = spapr_rma_size(spapr, &error_fatal);
>   
>       /* Setup a load limit for the ramdisk leaving room for SLOF and FDT */
>       load_limit = MIN(spapr->rma_size, RTAS_MAX_ADDR) - FW_OVERHEAD;
> @@ -2951,13 +2970,6 @@ static void spapr_machine_init(MachineState *machine)
>           }
>       }
>   
> -    if (spapr->rma_size < MIN_RMA_SLOF) {
> -        error_report(
> -            "pSeries SLOF firmware requires >= %ldMiB guest RMA (Real Mode Area memory)",
> -            MIN_RMA_SLOF / MiB);
> -        exit(1);
> -    }
> -
>       if (kernel_filename) {
>           uint64_t lowaddr = 0;
>   
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types
  2020-03-03  3:43 ` [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types David Gibson
  2020-03-03 10:02   ` Greg Kurz
@ 2020-03-05  8:45   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:45 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> In spapr_machine_init() we clamp the size of the RMA to 16GiB and the
> comment saying why doesn't make a whole lot of sense.  In fact, this was
> done because the real mode handling code elsewhere limited the RMA in TCG
> mode to the maximum value configurable in LPCR[RMLS], 16GiB.
> 
> But,
>   * Actually LPCR[RMLS] has been able to encode a 256GiB size for a very
>     long time, we just didn't implement it properly in the softmmu
>   * LPCR[RMLS] shouldn't really be relevant anyway, it only was because we
>     used to abuse the RMOR based translation mode in order to handle the
>     fact that we're not modelling the hypervisor parts of the cpu
> 
> We've now removed those limitations in the modelling so the 16GiB clamp no
> longer serves a function.  However, we can't just remove the limit
> universally: that would break migration to earlier qemu versions, where
> the 16GiB RMLS limit still applies, no matter how bad the reasons for it
> are.
> 
> So, we replace the 16GiB clamp with a clamp to a limit defined in the
> machine type class.  We set it to 16 GiB for machine types 4.2 and earlier,
> but set it to 0 meaning unlimited for the new 5.0 machine type.
> 
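For context: versioned spapr machine classes chain their class_options
hooks, each older version calling the next newer one before applying its
own compat settings, which is how a default set for 4.2 is inherited by
every older machine type while 5.0 stays unlimited.  A toy model of that
chaining (field and function names are simplified, not the literal
spapr.c contents):

#include <stdint.h>
#include <stdio.h>

#define GiB (1ULL << 30)

typedef struct { uint64_t rma_limit; } MachineClass;

static void machine_5_0_class_options(MachineClass *mc)
{
    (void)mc;               /* newest type: rma_limit stays 0 = no clamp */
}

static void machine_4_2_class_options(MachineClass *mc)
{
    machine_5_0_class_options(mc);  /* inherit newer defaults first */
    mc->rma_limit = 16 * GiB;       /* then apply the 4.2-era clamp */
}

static void machine_4_1_class_options(MachineClass *mc)
{
    machine_4_2_class_options(mc);  /* 4.1 and older inherit the clamp */
}

int main(void)
{
    MachineClass m50 = {0}, m42 = {0}, m41 = {0};

    machine_5_0_class_options(&m50);
    machine_4_2_class_options(&m42);
    machine_4_1_class_options(&m41);
    printf("5.0: %llu  4.2: %llu  4.1: %llu\n",
           (unsigned long long)m50.rma_limit,
           (unsigned long long)m42.rma_limit,
           (unsigned long long)m41.rma_limit);
    return 0;
}
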
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   hw/ppc/spapr.c         | 13 ++++++++-----
>   include/hw/ppc/spapr.h |  1 +
>   2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index ef7667455c..95bda4a615 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2701,12 +2701,14 @@ static void spapr_machine_init(MachineState *machine)
>   
>       spapr->rma_size = node0_size;
>   
> -    /* Actually we don't support unbounded RMA anymore since we added
> -     * proper emulation of HV mode. The max we can get is 16G which
> -     * also happens to be what we configure for PAPR mode so make sure
> -     * we don't do anything bigger than that
> +    /*
> +     * Clamp the RMA size based on machine type.  This is for
> +     * migration compatibility with older qemu versions, which limited
> +     * the RMA size for complicated and mostly bad reasons.
>        */
> -    spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);
> +    if (smc->rma_limit) {
> +        spapr->rma_size = MIN(spapr->rma_size, smc->rma_limit);
> +    }
>   
>       if (spapr->rma_size > node0_size) {
>           error_report("Numa node 0 has to span the RMA (%#08"HWADDR_PRIx")",
> @@ -4598,6 +4600,7 @@ static void spapr_machine_4_2_class_options(MachineClass *mc)
>       compat_props_add(mc->compat_props, hw_compat_4_2, hw_compat_4_2_len);
>       smc->default_caps.caps[SPAPR_CAP_CCF_ASSIST] = SPAPR_CAP_OFF;
>       smc->default_caps.caps[SPAPR_CAP_FWNMI_MCE] = SPAPR_CAP_OFF;
> +    smc->rma_limit = 16 * GiB;
>       mc->nvdimm_supported = false;
>   }
>   
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 90dbc55931..2015e37ac5 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -126,6 +126,7 @@ struct SpaprMachineClass {
>       bool pre_4_1_migration; /* don't migrate hpt-max-page-size */
>       bool linux_pci_probe;
>       bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
> +    hwaddr rma_limit;          /* clamp the RMA to this size */
>   
>       void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
>                             uint64_t *buid, hwaddr *pio,
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 13/17] spapr,ppc: Simplify signature of kvmppc_rma_size()
  2020-03-03  3:43 ` [PATCH v7 13/17] spapr,ppc: Simplify signature of kvmppc_rma_size() David Gibson
@ 2020-03-05  8:47   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:47 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, Cedric Le Goater,
	paulus, Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> This function calculates the maximum size of the RMA as implied by the
> host's page size and the structure of the VRMA (there are a number of other
> constraints on the RMA size which will supersede this one in many
> circumstances).
> 
> The current interface takes the current RMA size estimate, and clamps it
> to the VRMA derived size.  The only current caller passes in an arguably
> wrong value (it will match the current RMA estimate in some but not all
> cases).
> 
> We want to fix that, but for now keep concerns separated by having the
> KVM helper function simply return the VRMA-derived limit, and let the caller
> combine it with other constraints.  We call the new function
> kvmppc_vrma_limit() to more clearly indicate its limited responsibility.
> 
> The helper should only ever be called in the KVM enabled case, so replace
> its !CONFIG_KVM stub with an assert() rather than a dummy value.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cedric Le Goater <clg@fr.ibm.com>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
>   hw/ppc/spapr.c       | 5 +++--
>   target/ppc/kvm.c     | 5 ++---
>   target/ppc/kvm_ppc.h | 7 +++----
>   3 files changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 510494ad87..18bf4bc3de 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1586,8 +1586,9 @@ void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr)
>       spapr_reallocate_hpt(spapr, hpt_shift, &error_fatal);
>   
>       if (spapr->vrma_adjust) {
> -        spapr->rma_size = kvmppc_rma_size(spapr_node0_size(MACHINE(spapr)),
> -                                          spapr->htab_shift);
> +        hwaddr vrma_limit = kvmppc_vrma_limit(spapr->htab_shift);
> +
> +        spapr->rma_size = MIN(spapr_node0_size(MACHINE(spapr)), vrma_limit);
>       }
>   }
>   
> diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
> index 7f44b1aa1a..597f72be1b 100644
> --- a/target/ppc/kvm.c
> +++ b/target/ppc/kvm.c
> @@ -2113,7 +2113,7 @@ void kvmppc_error_append_smt_possible_hint(Error *const *errp)
>   
>   
>   #ifdef TARGET_PPC64
> -uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
> +uint64_t kvmppc_vrma_limit(unsigned int hash_shift)
>   {
>       struct kvm_ppc_smmu_info info;
>       long rampagesize, best_page_shift;
> @@ -2140,8 +2140,7 @@ uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
>           }
>       }
>   
> -    return MIN(current_size,
> -               1ULL << (best_page_shift + hash_shift - 7));
> +    return 1ULL << (best_page_shift + hash_shift - 7);
>   }
>   #endif
>   
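To make the formula concrete: with a 64KiB host page (best_page_shift =
16) and a 16MiB HPT (hash_shift = 24), the limit works out to
1ULL << (16 + 24 - 7) = 1ULL << 33 = 8GiB.  A quick standalone check
that mirrors the hunk above, nothing more:

#include <stdint.h>
#include <stdio.h>

/* Same arithmetic as the hunk above, pulled out for experimentation */
static uint64_t vrma_limit(unsigned page_shift, unsigned hash_shift)
{
    return 1ULL << (page_shift + hash_shift - 7);
}

int main(void)
{
    /* 64KiB pages + 16MiB HPT -> 8GiB; 4KiB pages + 16MiB HPT -> 512MiB */
    printf("%llu\n", (unsigned long long)vrma_limit(16, 24));
    printf("%llu\n", (unsigned long long)vrma_limit(12, 24));
    return 0;
}
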
> diff --git a/target/ppc/kvm_ppc.h b/target/ppc/kvm_ppc.h
> index 9e4f2357cc..332fa0aa1c 100644
> --- a/target/ppc/kvm_ppc.h
> +++ b/target/ppc/kvm_ppc.h
> @@ -47,7 +47,7 @@ void *kvmppc_create_spapr_tce(uint32_t liobn, uint32_t page_shift,
>                                 int *pfd, bool need_vfio);
>   int kvmppc_remove_spapr_tce(void *table, int pfd, uint32_t window_size);
>   int kvmppc_reset_htab(int shift_hint);
> -uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift);
> +uint64_t kvmppc_vrma_limit(unsigned int hash_shift);
>   bool kvmppc_has_cap_spapr_vfio(void);
>   #endif /* !CONFIG_USER_ONLY */
>   bool kvmppc_has_cap_epr(void);
> @@ -255,10 +255,9 @@ static inline int kvmppc_reset_htab(int shift_hint)
>       return 0;
>   }
>   
> -static inline uint64_t kvmppc_rma_size(uint64_t current_size,
> -                                       unsigned int hash_shift)
> +static inline uint64_t kvmppc_vrma_limit(unsigned int hash_shift)
>   {
> -    return ram_size;
> +    g_assert_not_reached();
>   }
>   
>   static inline bool kvmppc_hpt_needs_host_contiguous_pages(void)
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand
  2020-03-03  3:43 ` [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand David Gibson
  2020-03-03  8:57   ` Greg Kurz
@ 2020-03-05  8:48   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:48 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> When the LPCR is written, we update the env->rmls field with the RMA limit
> it implies.  Simplify things by just calculating the value directly from
> the LPCR when we need it.
> 
> It's possible this is a little slower, but it's unlikely to be significant,
> since this is only for real mode accesses in a translation configuration
> that's not used very often, and the whole thing is behind the qemu TLB
> anyway.  Therefore, keeping the number of state variables down and not
> having to worry about making sure it's always in sync seems the better
> option.
> 
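The hazard being avoided is the classic one with cached derived state:
forget one refresh site and the cache silently diverges from the
register it mirrors.  A contrived standalone illustration (the bit
extraction below is a placeholder, not the real LPCR[RMLS] layout):

#include <stdint.h>
#include <stdio.h>

struct cpu {
    uint64_t lpcr;
    uint64_t cached_rmls;   /* must be refreshed on *every* LPCR write */
};

static uint64_t rmls_from_lpcr(uint64_t lpcr)
{
    return (lpcr >> 4) & 0xf;   /* placeholder extraction */
}

int main(void)
{
    struct cpu c = { .lpcr = 0x20, .cached_rmls = 0 };

    c.cached_rmls = rmls_from_lpcr(c.lpcr);
    c.lpcr = 0x70;          /* ...a write path that forgot the refresh */
    printf("cached=%llu derived=%llu\n",
           (unsigned long long)c.cached_rmls,
           (unsigned long long)rmls_from_lpcr(c.lpcr));
    return 0;
}
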
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   target/ppc/cpu.h        | 1 -
>   target/ppc/mmu-hash64.c | 9 ++++++---
>   2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 8077fdb068..f9871b1233 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1046,7 +1046,6 @@ struct CPUPPCState {
>       uint64_t insns_flags2;
>   #if defined(TARGET_PPC64)
>       ppc_slb_t vrma_slb;
> -    target_ulong rmls;
>   #endif
>   
>       int error_code;
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index fcccaabb88..4fd7b7ee74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -837,8 +837,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>   
>               goto skip_slb_search;
>           } else {
> +            target_ulong limit = rmls_limit(cpu);
> +
>               /* Emulated old-style RMO mode, bounds check against RMLS */
> -            if (raddr >= env->rmls) {
> +            if (raddr >= limit) {
>                   if (rwx == 2) {
>                       ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
>                   } else {
> @@ -1000,8 +1002,10 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>                   return -1;
>               }
>           } else {
> +            target_ulong limit = rmls_limit(cpu);
> +
>               /* Emulated old-style RMO mode, bounds check against RMLS */
> -            if (raddr >= env->rmls) {
> +            if (raddr >= limit) {
>                   return -1;
>               }
>               return raddr | env->spr[SPR_RMOR];
> @@ -1091,7 +1095,6 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>       CPUPPCState *env = &cpu->env;
>   
>       env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> -    env->rmls = rmls_limit(cpu);
>       ppc_hash64_update_vrma(cpu);
>   }
>   
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS]
  2020-03-03  3:43 ` [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS] David Gibson
  2020-03-03  7:52   ` Greg Kurz
@ 2020-03-05  8:53   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:53 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> Currently we use a big switch statement in ppc_hash64_update_rmls() to work
> out what the right RMA limit is based on the LPCR[RMLS] field.  There's no
> formula for this - it's just an arbitrary mapping defined by the existing
> CPU implementations - but we can make it a bit more readable by using a
> lookup table rather than a switch.  In addition we can use the MiB/GiB
> symbols to make it a bit clearer.
> 
> While there, we add a bit of clarity and rationale to the comment about
> what happens if the LPCR[RMLS] doesn't contain a valid value.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> ---
>   target/ppc/mmu-hash64.c | 63 ++++++++++++++++++-----------------------
>   1 file changed, 27 insertions(+), 36 deletions(-)
> 
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 0ef330a614..934989e6d9 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -18,6 +18,7 @@
>    * License along with this library; if not, see <http://www.gnu.org/licenses/>.
>    */
>   #include "qemu/osdep.h"
> +#include "qemu/units.h"
>   #include "cpu.h"
>   #include "exec/exec-all.h"
>   #include "exec/helper-proto.h"
> @@ -757,6 +758,31 @@ static void ppc_hash64_set_c(PowerPCCPU *cpu, hwaddr ptex, uint64_t pte1)
>       stb_phys(CPU(cpu)->as, base + offset, (pte1 & 0xff) | 0x80);
>   }
>   
> +static target_ulong rmls_limit(PowerPCCPU *cpu)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    /*
> +     * This is the full 4 bits encoding of POWER8. Previous
> +     * CPUs only support a subset of these but the filtering
> +     * is done when writing LPCR.
> +     *
> +     * Unsupported values mean the OS has shot itself in the
> +     * foot. Return a 0-sized RMA in this case, which we expect
> +     * to trigger an immediate DSI or ISI

Maybe use qemu_log(GUEST_ERROR) then? (as a follow-up patch).
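
Presumably something along these lines (an untested sketch of the
suggested follow-up; qemu_log_mask() and LOG_GUEST_ERROR are the real
interfaces from include/qemu/log.h, stubbed here only so the sketch
builds on its own):

#include <inttypes.h>
#include <stdarg.h>
#include <stdint.h>
#include <stdio.h>

/* Local stubs for include/qemu/log.h; the real qemu_log_mask()
 * filters on the enabled log categories. */
#define LOG_GUEST_ERROR 1
static void qemu_log_mask(int mask, const char *fmt, ...)
{
    va_list ap;

    (void)mask;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

/* Same table as the patch; zero entries are invalid RMLS encodings */
static const uint64_t rma_sizes[16] = {
    [1] = 16ULL << 30,  [2] = 1ULL << 30,   [3] = 64ULL << 20,
    [4] = 256ULL << 20, [7] = 128ULL << 20, [8] = 32ULL << 20,
};

static uint64_t rmls_limit_demo(unsigned rmls)
{
    if (!rma_sizes[rmls]) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "invalid LPCR[RMLS] encoding %u\n", rmls);
    }
    return rma_sizes[rmls];
}

int main(void)
{
    printf("%" PRIu64 "\n", rmls_limit_demo(2));   /* 1GiB */
    printf("%" PRIu64 "\n", rmls_limit_demo(5));   /* 0, logs guest error */
    return 0;
}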

> +     */
> +    static const target_ulong rma_sizes[16] = {
> +        [1] = 16 * GiB,
> +        [2] = 1 * GiB,
> +        [3] = 64 * MiB,
> +        [4] = 256 * MiB,
> +        [7] = 128 * MiB,
> +        [8] = 32 * MiB,
> +    };
> +    target_ulong rmls = (env->spr[SPR_LPCR] & LPCR_RMLS) >> LPCR_RMLS_SHIFT;
> +
> +    return rma_sizes[rmls];
> +}
> +
>   int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                                   int rwx, int mmu_idx)
>   {
> @@ -1006,41 +1032,6 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
>       cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
>   }
>   
> -static void ppc_hash64_update_rmls(PowerPCCPU *cpu)
> -{
> -    CPUPPCState *env = &cpu->env;
> -    uint64_t lpcr = env->spr[SPR_LPCR];
> -
> -    /*
> -     * This is the full 4 bits encoding of POWER8. Previous
> -     * CPUs only support a subset of these but the filtering
> -     * is done when writing LPCR
> -     */
> -    switch ((lpcr & LPCR_RMLS) >> LPCR_RMLS_SHIFT) {
> -    case 0x8: /* 32MB */
> -        env->rmls = 0x2000000ull;
> -        break;
> -    case 0x3: /* 64MB */
> -        env->rmls = 0x4000000ull;
> -        break;
> -    case 0x7: /* 128MB */
> -        env->rmls = 0x8000000ull;
> -        break;
> -    case 0x4: /* 256MB */
> -        env->rmls = 0x10000000ull;
> -        break;
> -    case 0x2: /* 1GB */
> -        env->rmls = 0x40000000ull;
> -        break;
> -    case 0x1: /* 16GB */
> -        env->rmls = 0x400000000ull;
> -        break;
> -    default:
> -        /* What to do here ??? */
> -        env->rmls = 0;
> -    }

Good refactor.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> -}
> -
>   static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>   {
>       CPUPPCState *env = &cpu->env;
> @@ -1099,7 +1090,7 @@ void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>       CPUPPCState *env = &cpu->env;
>   
>       env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> -    ppc_hash64_update_rmls(cpu);
> +    env->rmls = rmls_limit(cpu);
>       ppc_hash64_update_vrma(cpu);
>   }
>   
> 



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking
  2020-03-03  3:43 ` [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking David Gibson
@ 2020-03-05  8:56   ` Philippe Mathieu-Daudé
  2020-03-10 10:06   ` Cédric Le Goater
  1 sibling, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:56 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> When we store the Logical Partitioning Control Register (LPCR) we have a
> big switch statement to work out which bits are valid for the cpu model
> we're emulating.
> 
> As well as being ugly, this isn't really conceptually correct, since it is
> based on the mmu_model variable, whereas the LPCR isn't (only) about the
> MMU, so mmu_model is basically just acting as a proxy for the cpu model.
> 
> Handle this in a simpler way, by adding a suitable lpcr_mask to the QOM
> class.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> ---
>   target/ppc/cpu-qom.h            |  1 +
>   target/ppc/mmu-hash64.c         | 36 ++-------------------------------
>   target/ppc/translate_init.inc.c | 27 +++++++++++++++++++++----
>   3 files changed, 26 insertions(+), 38 deletions(-)
> 
> diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
> index e499575dc8..15d6b54a7d 100644
> --- a/target/ppc/cpu-qom.h
> +++ b/target/ppc/cpu-qom.h
> @@ -177,6 +177,7 @@ typedef struct PowerPCCPUClass {
>       uint64_t insns_flags;
>       uint64_t insns_flags2;
>       uint64_t msr_mask;
> +    uint64_t lpcr_mask;         /* Available bits in the LPCR */
>       uint64_t lpcr_pm;           /* Power-saving mode Exit Cause Enable bits */
>       powerpc_mmu_t   mmu_model;
>       powerpc_excp_t  excp_model;
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index caf47ad6fc..0ef330a614 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -1095,42 +1095,10 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>   
>   void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>   {
> +    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
>       CPUPPCState *env = &cpu->env;
> -    uint64_t lpcr = 0;
>   
> -    /* Filter out bits */
> -    switch (env->mmu_model) {
> -    case POWERPC_MMU_2_03: /* P5p */
> -        lpcr = val & (LPCR_RMLS | LPCR_ILE |
> -                      LPCR_LPES0 | LPCR_LPES1 |
> -                      LPCR_RMI | LPCR_HDICE);
> -        break;
> -    case POWERPC_MMU_2_06: /* P7 */
> -        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
> -                      LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> -                      LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
> -                      LPCR_MER | LPCR_TC |
> -                      LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE);
> -        break;
> -    case POWERPC_MMU_2_07: /* P8 */
> -        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
> -                      LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> -                      LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
> -                      LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
> -                      LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE);
> -        break;
> -    case POWERPC_MMU_3_00: /* P9 */
> -        lpcr = val & (LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> -                      (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> -                      LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> -                      (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> -                      LPCR_DEE | LPCR_OEE)) | LPCR_MER | LPCR_GTSE | LPCR_TC |
> -                      LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE);
> -        break;
> -    default:
> -        g_assert_not_reached();
> -    }
> -    env->spr[SPR_LPCR] = lpcr;
> +    env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
>       ppc_hash64_update_rmls(cpu);
>       ppc_hash64_update_vrma(cpu);
>   }
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index f7acd3d61d..68aa4dfad8 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -8476,6 +8476,8 @@ POWERPC_FAMILY(POWER5P)(ObjectClass *oc, void *data)
>                       (1ull << MSR_DR) |
>                       (1ull << MSR_PMM) |
>                       (1ull << MSR_RI);
> +    pcc->lpcr_mask = LPCR_RMLS | LPCR_ILE | LPCR_LPES0 | LPCR_LPES1 |
> +        LPCR_RMI | LPCR_HDICE;
>       pcc->mmu_model = POWERPC_MMU_2_03;
>   #if defined(CONFIG_SOFTMMU)
>       pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> @@ -8614,6 +8616,12 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
>                       (1ull << MSR_PMM) |
>                       (1ull << MSR_RI) |
>                       (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
> +        LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> +        LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
> +        LPCR_MER | LPCR_TC |
> +        LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
>       pcc->mmu_model = POWERPC_MMU_2_06;
>   #if defined(CONFIG_SOFTMMU)
>       pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> @@ -8630,7 +8638,6 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
>       pcc->l1_dcache_size = 0x8000;
>       pcc->l1_icache_size = 0x8000;
>       pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
>   }
>   
>   static void init_proc_POWER8(CPUPPCState *env)
> @@ -8785,6 +8792,13 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
>                       (1ull << MSR_TS0) |
>                       (1ull << MSR_TS1) |
>                       (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
> +        LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> +        LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
> +        LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
> +        LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
> +                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
>       pcc->mmu_model = POWERPC_MMU_2_07;
>   #if defined(CONFIG_SOFTMMU)
>       pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> @@ -8802,8 +8816,6 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
>       pcc->l1_dcache_size = 0x8000;
>       pcc->l1_icache_size = 0x8000;
>       pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
> -                   LPCR_P8_PECE3 | LPCR_P8_PECE4;

Moving lpcr_pm in the same patch makes review slightly harder, anyway:
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

>   }
>   
>   #ifdef CONFIG_SOFTMMU
> @@ -8995,6 +9007,14 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
>                       (1ull << MSR_PMM) |
>                       (1ull << MSR_RI) |
>                       (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> +        (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> +        LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> +        (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> +                             LPCR_DEE | LPCR_OEE))
> +        | LPCR_MER | LPCR_GTSE | LPCR_TC |
> +        LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
>       pcc->mmu_model = POWERPC_MMU_3_00;
>   #if defined(CONFIG_SOFTMMU)
>       pcc->handle_mmu_fault = ppc64_v3_handle_mmu_fault;
> @@ -9014,7 +9034,6 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
>       pcc->l1_dcache_size = 0x8000;
>       pcc->l1_icache_size = 0x8000;
>       pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
>   }
>   
>   #ifdef CONFIG_SOFTMMU
> 



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode
  2020-03-03  3:43 ` [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode David Gibson
@ 2020-03-05  8:58   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:58 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> a4f30719a8cd, way back in 2007, noted that "PowerPC hypervisor mode is not
> fundamentally available only for PowerPC 64" and added a 32-bit version
> of the MSR[HV] bit.
> 
> But nothing was ever really done with that; there is no meaningful support
> for 32-bit hypervisor mode 13 years later.  Let's stop pretending and just
> remove the stubs.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> ---
>   target/ppc/cpu.h                | 21 +++++++--------------
>   target/ppc/translate_init.inc.c |  6 +++---
>   2 files changed, 10 insertions(+), 17 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index b283042515..8077fdb068 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -24,8 +24,6 @@
>   #include "exec/cpu-defs.h"
>   #include "cpu-qom.h"
>   
> -/* #define PPC_EMULATE_32BITS_HYPV */
> -
>   #define TCG_GUEST_DEFAULT_MO 0
>   
>   #define TARGET_PAGE_BITS_64K 16
> @@ -300,13 +298,12 @@ typedef struct ppc_v3_pate_t {
>   #define MSR_SF   63 /* Sixty-four-bit mode                            hflags */
>   #define MSR_TAG  62 /* Tag-active mode (POWERx ?)                            */
>   #define MSR_ISF  61 /* Sixty-four-bit interrupt mode on 630                  */
> -#define MSR_SHV  60 /* hypervisor state                               hflags */
> +#define MSR_HV   60 /* hypervisor state                               hflags */
>   #define MSR_TS0  34 /* Transactional state, 2 bits (Book3s)                  */
>   #define MSR_TS1  33
>   #define MSR_TM   32 /* Transactional Memory Available (Book3s)               */
>   #define MSR_CM   31 /* Computation mode for BookE                     hflags */
>   #define MSR_ICM  30 /* Interrupt computation mode for BookE                  */
> -#define MSR_THV  29 /* hypervisor state for 32 bits PowerPC           hflags */
>   #define MSR_GS   28 /* guest state for BookE                                 */
>   #define MSR_UCLE 26 /* User-mode cache lock enable for BookE                 */
>   #define MSR_VR   25 /* altivec available                            x hflags */
> @@ -401,10 +398,13 @@ typedef struct ppc_v3_pate_t {
>   
>   #define msr_sf   ((env->msr >> MSR_SF)   & 1)
>   #define msr_isf  ((env->msr >> MSR_ISF)  & 1)
> -#define msr_shv  ((env->msr >> MSR_SHV)  & 1)
> +#if defined(TARGET_PPC64)
> +#define msr_hv   ((env->msr >> MSR_HV)   & 1)
> +#else
> +#define msr_hv   (0)
> +#endif
>   #define msr_cm   ((env->msr >> MSR_CM)   & 1)
>   #define msr_icm  ((env->msr >> MSR_ICM)  & 1)
> -#define msr_thv  ((env->msr >> MSR_THV)  & 1)
>   #define msr_gs   ((env->msr >> MSR_GS)   & 1)
>   #define msr_ucle ((env->msr >> MSR_UCLE) & 1)
>   #define msr_vr   ((env->msr >> MSR_VR)   & 1)
> @@ -449,16 +449,9 @@ typedef struct ppc_v3_pate_t {
>   
>   /* Hypervisor bit is more specific */
>   #if defined(TARGET_PPC64)
> -#define MSR_HVB (1ULL << MSR_SHV)
> -#define msr_hv  msr_shv
> -#else
> -#if defined(PPC_EMULATE_32BITS_HYPV)
> -#define MSR_HVB (1ULL << MSR_THV)
> -#define msr_hv  msr_thv
> +#define MSR_HVB (1ULL << MSR_HV)
>   #else
>   #define MSR_HVB (0ULL)
> -#define msr_hv  (0)
> -#endif
>   #endif
>   
>   /* DSISR */
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index 2f7125c51f..df3401cf06 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -8764,7 +8764,7 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
>                           PPC2_ISA205 | PPC2_ISA207S | PPC2_FP_CVT_S64 |
>                           PPC2_TM | PPC2_PM_ISA206;
>       pcc->msr_mask = (1ull << MSR_SF) |
> -                    (1ull << MSR_SHV) |
> +                    (1ull << MSR_HV) |
>                       (1ull << MSR_TM) |
>                       (1ull << MSR_VR) |
>                       (1ull << MSR_VSX) |
> @@ -8976,7 +8976,7 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
>                           PPC2_ISA205 | PPC2_ISA207S | PPC2_FP_CVT_S64 |
>                           PPC2_TM | PPC2_ISA300 | PPC2_PRCNTL;
>       pcc->msr_mask = (1ull << MSR_SF) |
> -                    (1ull << MSR_SHV) |
> +                    (1ull << MSR_HV) |
>                       (1ull << MSR_TM) |
>                       (1ull << MSR_VR) |
>                       (1ull << MSR_VSX) |
> @@ -9186,7 +9186,7 @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
>                           PPC2_ISA205 | PPC2_ISA207S | PPC2_FP_CVT_S64 |
>                           PPC2_TM | PPC2_ISA300 | PPC2_PRCNTL;
>       pcc->msr_mask = (1ull << MSR_SF) |
> -                    (1ull << MSR_SHV) |
> +                    (1ull << MSR_HV) |
>                       (1ull << MSR_TM) |
>                       (1ull << MSR_VR) |
>                       (1ull << MSR_VSX) |
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 04/17] target/ppc: Introduce ppc_hash64_use_vrma() helper
  2020-03-03  3:43 ` [PATCH v7 04/17] target/ppc: Introduce ppc_hash64_use_vrma() helper David Gibson
@ 2020-03-05  8:59   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  8:59 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

On 3/3/20 4:43 AM, David Gibson wrote:
> When running guests under a hypervisor, the hypervisor obviously needs to
> be protected from guest accesses even if those are in what the guest
> considers real mode (translation off).  The POWER hardware provides two
> ways of doing that: The old way has guest real mode accesses simply offset
> and bounds checked into host addresses.  It works, but requires that a
> significant chunk of the guest's memory - the RMA - be physically
> contiguous in the host, which is pretty inconvenient.  The new way, known
> as VRMA, has guest real mode accesses translated in roughly the normal way
> but with some special parameters.
> 
> In POWER7 and POWER8 the LPCR[VPM0] bit selected between the two modes, but
> in POWER9 only VRMA mode is supported and LPCR[VPM0] no longer exists.  We
> handle that difference in behaviour in ppc_hash64_set_isi(), but not in
> other places that we blindly check LPCR[VPM0].
> 
> Correct those instances with a new helper to tell if we should be in VRMA
> mode.
> 
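Condensing the dispatch that results (a paraphrase of the code below,
not a literal quote): hypervisor-mode real accesses use HRMOR, guests
use VRMA when selected (always on ISA v3.00), and everything else falls
back to the legacy RMOR/RMLS scheme.  As a standalone sketch:

#include <stdbool.h>
#include <stdio.h>

typedef enum { RM_HRMOR, RM_VRMA, RM_RMOR_RMLS } RealModeKind;

/* Condensed decision for a hash-MMU access with translation off */
static RealModeKind real_mode_kind(bool msr_hv, bool use_vrma)
{
    if (msr_hv) {
        return RM_HRMOR;        /* hypervisor: offset by HRMOR */
    } else if (use_vrma) {
        return RM_VRMA;         /* VPM0 set, or always on ISA v3.00 */
    } else {
        return RM_RMOR_RMLS;    /* legacy: offset by RMOR, bounded by RMLS */
    }
}

int main(void)
{
    printf("%d %d %d\n",
           real_mode_kind(true, false),
           real_mode_kind(false, true),
           real_mode_kind(false, false));
    return 0;
}
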
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> ---
>   target/ppc/mmu-hash64.c | 43 ++++++++++++++++++++---------------------
>   1 file changed, 21 insertions(+), 22 deletions(-)
> 
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 392f90e0ae..e372c42add 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -668,6 +668,21 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
>       return 0;
>   }
>   
> +static bool ppc_hash64_use_vrma(CPUPPCState *env)
> +{
> +    switch (env->mmu_model) {
> +    case POWERPC_MMU_3_00:
> +        /*
> +         * ISAv3.0 (POWER9) always uses VRMA, the VPM0 field and RMOR
> +         * register no longer exist
> +         */
> +        return true;
> +
> +    default:
> +        return !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> +    }
> +}
> +
>   static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
>   {
>       CPUPPCState *env = &POWERPC_CPU(cs)->env;
> @@ -676,15 +691,7 @@ static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
>       if (msr_ir) {
>           vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
>       } else {
> -        switch (env->mmu_model) {
> -        case POWERPC_MMU_3_00:
> -            /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
> -            vpm = true;
> -            break;
> -        default:
> -            vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> -            break;
> -        }
> +        vpm = ppc_hash64_use_vrma(env);
>       }
>       if (vpm && !msr_hv) {
>           cs->exception_index = POWERPC_EXCP_HISI;
> @@ -702,15 +709,7 @@ static void ppc_hash64_set_dsi(CPUState *cs, uint64_t dar, uint64_t dsisr)
>       if (msr_dr) {
>           vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
>       } else {
> -        switch (env->mmu_model) {
> -        case POWERPC_MMU_3_00:
> -            /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
> -            vpm = true;
> -            break;
> -        default:
> -            vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> -            break;
> -        }
> +        vpm = ppc_hash64_use_vrma(env);
>       }
>       if (vpm && !msr_hv) {
>           cs->exception_index = POWERPC_EXCP_HDSI;
> @@ -799,7 +798,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>               if (!(eaddr >> 63)) {
>                   raddr |= env->spr[SPR_HRMOR];
>               }
> -        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> +        } else if (ppc_hash64_use_vrma(env)) {
>               /* Emulated VRMA mode */
>               slb = &env->vrma_slb;
>               if (!slb->sps) {
> @@ -967,7 +966,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>           } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
>               /* In HV mode, add HRMOR if top EA bit is clear */
>               return raddr | env->spr[SPR_HRMOR];
> -        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> +        } else if (ppc_hash64_use_vrma(env)) {
>               /* Emulated VRMA mode */
>               slb = &env->vrma_slb;
>               if (!slb->sps) {
> @@ -1056,8 +1055,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>       slb->sps = NULL;
>   
>       /* Is VRMA enabled ? */
> -    lpcr = env->spr[SPR_LPCR];
> -    if (!(lpcr & LPCR_VPM0)) {
> +    if (!ppc_hash64_use_vrma(env)) {
>           return;
>       }
>   
> @@ -1065,6 +1063,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>        * Make one up. Mostly ignore the ESID which will not be needed
>        * for translation
>        */
> +    lpcr = env->spr[SPR_LPCR];
>       vsid = SLB_VSID_VRMA;
>       vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
>       vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> 

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently
  2020-03-03  3:43 ` [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently David Gibson
  2020-03-03  9:37   ` Greg Kurz
@ 2020-03-05  9:17   ` Philippe Mathieu-Daudé
  2020-03-05  9:47     ` Greg Kurz
  1 sibling, 1 reply; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05  9:17 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, clg, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, Michael S. Tsirkin, aik,
	farosas, Mark Cave-Ayland, Igor Mammedov, paulus,
	Edgar E. Iglesias, Paolo Bonzini

Hi David,

On 3/3/20 4:43 AM, David Gibson wrote:
> Currently, we construct the SLBE used for VRMA translations when the LPCR
> is written (which controls some bits in the SLBE), then use it later for
> translations.
> 
> This is a bit complex and confusing - simplify it by constructing
> the SLBE directly from the LPCR when we need it.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>   target/ppc/cpu.h        |  3 --
>   target/ppc/mmu-hash64.c | 92 ++++++++++++++++-------------------------
>   2 files changed, 35 insertions(+), 60 deletions(-)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index f9871b1233..5a55fb02bd 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1044,9 +1044,6 @@ struct CPUPPCState {
>       uint32_t flags;
>       uint64_t insns_flags;
>       uint64_t insns_flags2;
> -#if defined(TARGET_PPC64)
> -    ppc_slb_t vrma_slb;
> -#endif

I find it confusing to move this member to the stack when the other SLB 
entries stay in the CPUState context on the heap.

Consider amending the following (I can also send a patch later if you 
prefer):

-- >8 --
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 5a55fb02bd..ade71c3592 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -988,6 +988,7 @@ struct CPUPPCState {
      /* MMU context, only relevant for full system emulation */
  #if defined(TARGET_PPC64)
      ppc_slb_t slb[MAX_SLB_ENTRIES]; /* PowerPC 64 SLB area */
+    ppc_slb_t vrma_slbe;
  #endif
      target_ulong sr[32];   /* segment registers */
      uint32_t nb_BATs;      /* number of BATs */
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 34f6009b1e..abbdd531c6 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -818,7 +818,6 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
  {
      CPUState *cs = CPU(cpu);
      CPUPPCState *env = &cpu->env;
-    ppc_slb_t vrma_slbe;
      ppc_slb_t *slb;
      unsigned apshift;
      hwaddr ptex;
@@ -857,7 +856,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
              }
          } else if (ppc_hash64_use_vrma(env)) {
              /* Emulated VRMA mode */
-            slb = &vrma_slbe;
+            slb = &env->vrma_slbe;
              if (build_vrma_slbe(cpu, slb) != 0) {
                  /* Invalid VRMA setup, machine check */
                  cs->exception_index = POWERPC_EXCP_MCHECK;
@@ -1006,7 +1005,6 @@ skip_slb_search:
  hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
  {
      CPUPPCState *env = &cpu->env;
-    ppc_slb_t vrma_slbe;
      ppc_slb_t *slb;
      hwaddr ptex, raddr;
      ppc_hash_pte64_t pte;
@@ -1028,7 +1026,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
              return raddr | env->spr[SPR_HRMOR];
          } else if (ppc_hash64_use_vrma(env)) {
              /* Emulated VRMA mode */
-            slb = &vrma_slbe;
+            slb = &env->vrma_slbe;
              if (build_vrma_slbe(cpu, slb) != 0) {
                  return -1;
              }
---

With the hunk amended, I'm happy to give:
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

>   
>       int error_code;
>       uint32_t pending_interrupts;
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 4fd7b7ee74..34f6009b1e 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -784,11 +784,41 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
>       return rma_sizes[rmls];
>   }
>   
> +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    target_ulong lpcr = env->spr[SPR_LPCR];
> +    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> +    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
> +    int i;
> +
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
> +
> +        if (!sps->page_shift) {
> +            break;
> +        }
> +
> +        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
> +            slb->esid = SLB_ESID_V;
> +            slb->vsid = vsid;
> +            slb->sps = sps;
> +            return 0;
> +        }
> +    }
> +
> +    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
> +                 TARGET_FMT_lx"\n", lpcr);
> +
> +    return -1;
> +}
> +
>   int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                                   int rwx, int mmu_idx)
>   {
>       CPUState *cs = CPU(cpu);
>       CPUPPCState *env = &cpu->env;
> +    ppc_slb_t vrma_slbe;
>       ppc_slb_t *slb;
>       unsigned apshift;
>       hwaddr ptex;
> @@ -827,8 +857,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>               }
>           } else if (ppc_hash64_use_vrma(env)) {
>               /* Emulated VRMA mode */
> -            slb = &env->vrma_slb;
> -            if (!slb->sps) {
> +            slb = &vrma_slbe;
> +            if (build_vrma_slbe(cpu, slb) != 0) {
>                   /* Invalid VRMA setup, machine check */
>                   cs->exception_index = POWERPC_EXCP_MCHECK;
>                   env->error_code = 0;
> @@ -976,6 +1006,7 @@ skip_slb_search:
>   hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>   {
>       CPUPPCState *env = &cpu->env;
> +    ppc_slb_t vrma_slbe;
>       ppc_slb_t *slb;
>       hwaddr ptex, raddr;
>       ppc_hash_pte64_t pte;
> @@ -997,8 +1028,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>               return raddr | env->spr[SPR_HRMOR];
>           } else if (ppc_hash64_use_vrma(env)) {
>               /* Emulated VRMA mode */
> -            slb = &env->vrma_slb;
> -            if (!slb->sps) {
> +            slb = &vrma_slbe;
> +            if (build_vrma_slbe(cpu, slb) != 0) {
>                   return -1;
>               }
>           } else {
> @@ -1037,65 +1068,12 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
>       cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
>   }
>   
> -static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> -{
> -    CPUPPCState *env = &cpu->env;
> -    const PPCHash64SegmentPageSizes *sps = NULL;
> -    target_ulong esid, vsid, lpcr;
> -    ppc_slb_t *slb = &env->vrma_slb;
> -    uint32_t vrmasd;
> -    int i;
> -
> -    /* First clear it */
> -    slb->esid = slb->vsid = 0;
> -    slb->sps = NULL;
> -
> -    /* Is VRMA enabled ? */
> -    if (!ppc_hash64_use_vrma(env)) {
> -        return;
> -    }
> -
> -    /*
> -     * Make one up. Mostly ignore the ESID which will not be needed
> -     * for translation
> -     */
> -    lpcr = env->spr[SPR_LPCR];
> -    vsid = SLB_VSID_VRMA;
> -    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> -    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> -    esid = SLB_ESID_V;
> -
> -    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> -        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
> -
> -        if (!sps1->page_shift) {
> -            break;
> -        }
> -
> -        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> -            sps = sps1;
> -            break;
> -        }
> -    }
> -
> -    if (!sps) {
> -        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
> -                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
> -        return;
> -    }
> -
> -    slb->vsid = vsid;
> -    slb->esid = esid;
> -    slb->sps = sps;
> -}
> -
>   void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>   {
>       PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
>       CPUPPCState *env = &cpu->env;
>   
>       env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> -    ppc_hash64_update_vrma(cpu);
>   }
>   
>   void helper_store_lpcr(CPUPPCState *env, target_ulong val)
> 



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently
  2020-03-05  9:17   ` Philippe Mathieu-Daudé
@ 2020-03-05  9:47     ` Greg Kurz
  2020-03-05 10:35       ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 40+ messages in thread
From: Greg Kurz @ 2020-03-05  9:47 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Igor Mammedov,
	qemu-ppc, clg, Edgar E. Iglesias, Paolo Bonzini, paulus,
	David Gibson

On Thu, 5 Mar 2020 10:17:03 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> Hi David,
> 
> On 3/3/20 4:43 AM, David Gibson wrote:
> > Currently, we construct the SLBE used for VRMA translations when the LPCR
> > is written (which controls some bits in the SLBE), then use it later for
> > translations.
> > 
> > This is a bit complex and confusing - simplify it by constructing the
> > SLBE directly from the LPCR when we need it.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >   target/ppc/cpu.h        |  3 --
> >   target/ppc/mmu-hash64.c | 92 ++++++++++++++++-------------------------
> >   2 files changed, 35 insertions(+), 60 deletions(-)
> > 
> > diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> > index f9871b1233..5a55fb02bd 100644
> > --- a/target/ppc/cpu.h
> > +++ b/target/ppc/cpu.h
> > @@ -1044,9 +1044,6 @@ struct CPUPPCState {
> >       uint32_t flags;
> >       uint64_t insns_flags;
> >       uint64_t insns_flags2;
> > -#if defined(TARGET_PPC64)
> > -    ppc_slb_t vrma_slb;
> > -#endif
> 
> I find it confusing to move this member to the stack when the other
> SLB entries stay in the CPUState context on the heap.
> 

The "other slb" sit in CPUPPCState because it is genuine state,
while vrma_slb is something we can derive from some other state
when we need it. David's patch goes into the good direction of
not exposing anymore something that shouldn't sit in CPUPPCState
in the first place.

What is confusing for you?
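
To illustrate with a minimal, self-contained sketch (plain C, not QEMU
code -- every name and encoding below is invented): the derived value
is rebuilt from the genuine state into a caller-owned stack temporary
each time it is needed, so there is no cached copy to keep coherent.

#include <stdio.h>

/*
 * Toy model of the pattern: "genuine" state lives in the state
 * structure; the derived value is rebuilt from it, on demand, into
 * a caller-owned temporary.
 */
struct demo_env {
    unsigned long lpcr;             /* genuine state, set by the guest */
};

struct demo_slbe {
    unsigned long esid, vsid;       /* derived, never stored in demo_env */
};

static int demo_build_vrma_slbe(const struct demo_env *env,
                                struct demo_slbe *slb)
{
    if (!(env->lpcr & 0x1)) {       /* pretend bit 0 enables VRMA */
        return -1;                  /* invalid setup */
    }
    slb->esid = 0x800;              /* placeholder encodings */
    slb->vsid = env->lpcr >> 1;
    return 0;
}

int main(void)
{
    struct demo_env env = { .lpcr = 0x43 };
    struct demo_slbe slbe;          /* stack temporary, one per use */

    if (demo_build_vrma_slbe(&env, &slbe) == 0) {
        printf("vsid=%#lx\n", slbe.vsid);
    }
    return 0;
}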

> Consider amending the following (I can also send a patch later if you 
> prefer):
> 
> -- >8 --
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 5a55fb02bd..ade71c3592 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -988,6 +988,7 @@ struct CPUPPCState {
>       /* MMU context, only relevant for full system emulation */
>   #if defined(TARGET_PPC64)
>       ppc_slb_t slb[MAX_SLB_ENTRIES]; /* PowerPC 64 SLB area */
> +    ppc_slb_t vrma_slbe;
>   #endif
>       target_ulong sr[32];   /* segment registers */
>       uint32_t nb_BATs;      /* number of BATs */
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..abbdd531c6 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -818,7 +818,6 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>   {
>       CPUState *cs = CPU(cpu);
>       CPUPPCState *env = &cpu->env;
> -    ppc_slb_t vrma_slbe;
>       ppc_slb_t *slb;
>       unsigned apshift;
>       hwaddr ptex;
> @@ -857,7 +856,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>               }
>           } else if (ppc_hash64_use_vrma(env)) {
>               /* Emulated VRMA mode */
> -            slb = &vrma_slbe;
> +            slb = &env->vrma_slbe;
>               if (build_vrma_slbe(cpu, slb) != 0) {
>                   /* Invalid VRMA setup, machine check */
>                   cs->exception_index = POWERPC_EXCP_MCHECK;
> @@ -1006,7 +1005,6 @@ skip_slb_search:
>   hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>   {
>       CPUPPCState *env = &cpu->env;
> -    ppc_slb_t vrma_slbe;
>       ppc_slb_t *slb;
>       hwaddr ptex, raddr;
>       ppc_hash_pte64_t pte;
> @@ -1028,7 +1026,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>               return raddr | env->spr[SPR_HRMOR];
>           } else if (ppc_hash64_use_vrma(env)) {
>               /* Emulated VRMA mode */
> -            slb = &vrma_slbe;
> +            slb = &env->vrma_slbe;
>               if (build_vrma_slbe(cpu, slb) != 0) {
>                   return -1;
>               }
> ---
> 
> With the hunk amended, I'm happy to give:
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> 
> >   
> >       int error_code;
> >       uint32_t pending_interrupts;
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index 4fd7b7ee74..34f6009b1e 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -784,11 +784,41 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
> >       return rma_sizes[rmls];
> >   }
> >   
> > +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
> > +{
> > +    CPUPPCState *env = &cpu->env;
> > +    target_ulong lpcr = env->spr[SPR_LPCR];
> > +    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> > +    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
> > +    int i;
> > +
> > +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> > +        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
> > +
> > +        if (!sps->page_shift) {
> > +            break;
> > +        }
> > +
> > +        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
> > +            slb->esid = SLB_ESID_V;
> > +            slb->vsid = vsid;
> > +            slb->sps = sps;
> > +            return 0;
> > +        }
> > +    }
> > +
> > +    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
> > +                 TARGET_FMT_lx"\n", lpcr);
> > +
> > +    return -1;
> > +}
> > +
> >   int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
> >                                   int rwx, int mmu_idx)
> >   {
> >       CPUState *cs = CPU(cpu);
> >       CPUPPCState *env = &cpu->env;
> > +    ppc_slb_t vrma_slbe;
> >       ppc_slb_t *slb;
> >       unsigned apshift;
> >       hwaddr ptex;
> > @@ -827,8 +857,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
> >               }
> >           } else if (ppc_hash64_use_vrma(env)) {
> >               /* Emulated VRMA mode */
> > -            slb = &env->vrma_slb;
> > -            if (!slb->sps) {
> > +            slb = &vrma_slbe;
> > +            if (build_vrma_slbe(cpu, slb) != 0) {
> >                   /* Invalid VRMA setup, machine check */
> >                   cs->exception_index = POWERPC_EXCP_MCHECK;
> >                   env->error_code = 0;
> > @@ -976,6 +1006,7 @@ skip_slb_search:
> >   hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
> >   {
> >       CPUPPCState *env = &cpu->env;
> > +    ppc_slb_t vrma_slbe;
> >       ppc_slb_t *slb;
> >       hwaddr ptex, raddr;
> >       ppc_hash_pte64_t pte;
> > @@ -997,8 +1028,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
> >               return raddr | env->spr[SPR_HRMOR];
> >           } else if (ppc_hash64_use_vrma(env)) {
> >               /* Emulated VRMA mode */
> > -            slb = &env->vrma_slb;
> > -            if (!slb->sps) {
> > +            slb = &vrma_slbe;
> > +            if (build_vrma_slbe(cpu, slb) != 0) {
> >                   return -1;
> >               }
> >           } else {
> > @@ -1037,65 +1068,12 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
> >       cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
> >   }
> >   
> > -static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> > -{
> > -    CPUPPCState *env = &cpu->env;
> > -    const PPCHash64SegmentPageSizes *sps = NULL;
> > -    target_ulong esid, vsid, lpcr;
> > -    ppc_slb_t *slb = &env->vrma_slb;
> > -    uint32_t vrmasd;
> > -    int i;
> > -
> > -    /* First clear it */
> > -    slb->esid = slb->vsid = 0;
> > -    slb->sps = NULL;
> > -
> > -    /* Is VRMA enabled ? */
> > -    if (!ppc_hash64_use_vrma(env)) {
> > -        return;
> > -    }
> > -
> > -    /*
> > -     * Make one up. Mostly ignore the ESID which will not be needed
> > -     * for translation
> > -     */
> > -    lpcr = env->spr[SPR_LPCR];
> > -    vsid = SLB_VSID_VRMA;
> > -    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> > -    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> > -    esid = SLB_ESID_V;
> > -
> > -    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> > -        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
> > -
> > -        if (!sps1->page_shift) {
> > -            break;
> > -        }
> > -
> > -        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> > -            sps = sps1;
> > -            break;
> > -        }
> > -    }
> > -
> > -    if (!sps) {
> > -        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
> > -                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
> > -        return;
> > -    }
> > -
> > -    slb->vsid = vsid;
> > -    slb->esid = esid;
> > -    slb->sps = sps;
> > -}
> > -
> >   void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
> >   {
> >       PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
> >       CPUPPCState *env = &cpu->env;
> >   
> >       env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> > -    ppc_hash64_update_vrma(cpu);
> >   }
> >   
> >   void helper_store_lpcr(CPUPPCState *env, target_ulong val)
> > 
> 



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently
  2020-03-05  9:47     ` Greg Kurz
@ 2020-03-05 10:35       ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 40+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-03-05 10:35 UTC (permalink / raw)
  To: Greg Kurz
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, Igor Mammedov,
	qemu-ppc, clg, Edgar E. Iglesias, Paolo Bonzini, paulus,
	David Gibson

On 3/5/20 10:47 AM, Greg Kurz wrote:
> On Thu, 5 Mar 2020 10:17:03 +0100
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> 
>> Hi David,
>>
>> On 3/3/20 4:43 AM, David Gibson wrote:
>>> Currently, we construct the SLBE used for VRMA translations when the LPCR
>>> is written (which controls some bits in the SLBE), then use it later for
>>> translations.
>>>
>>> This is a bit complex and confusing - simplify it by constructing the
>>> SLBE directly from the LPCR when we need it.
>>>
>>> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
>>> ---
>>>    target/ppc/cpu.h        |  3 --
>>>    target/ppc/mmu-hash64.c | 92 ++++++++++++++++-------------------------
>>>    2 files changed, 35 insertions(+), 60 deletions(-)
>>>
>>> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
>>> index f9871b1233..5a55fb02bd 100644
>>> --- a/target/ppc/cpu.h
>>> +++ b/target/ppc/cpu.h
>>> @@ -1044,9 +1044,6 @@ struct CPUPPCState {
>>>        uint32_t flags;
>>>        uint64_t insns_flags;
>>>        uint64_t insns_flags2;
>>> -#if defined(TARGET_PPC64)
>>> -    ppc_slb_t vrma_slb;
>>> -#endif
>>
>> I find it confusing to move this member to the stack when the other
>> SLB entries stay in the CPUState context on the heap.
>>
> 
> The "other slb" sit in CPUPPCState because it is genuine state,
> while vrma_slb is something we can derive from some other state
> when we need it. David's patch goes into the good direction of
> not exposing anymore something that shouldn't sit in CPUPPCState
> in the first place.

OK.

> 
> What is confusing for you?

Now I'm fine with it, so I'm happy to give:
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> 
>> Consider amending the following (I can also send a patch later if you
>> prefer):
>>
>> -- >8 --
>> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
>> index 5a55fb02bd..ade71c3592 100644
>> --- a/target/ppc/cpu.h
>> +++ b/target/ppc/cpu.h
>> @@ -988,6 +988,7 @@ struct CPUPPCState {
>>        /* MMU context, only relevant for full system emulation */
>>    #if defined(TARGET_PPC64)
>>        ppc_slb_t slb[MAX_SLB_ENTRIES]; /* PowerPC 64 SLB area */
>> +    ppc_slb_t vrma_slbe;
>>    #endif
>>        target_ulong sr[32];   /* segment registers */
>>        uint32_t nb_BATs;      /* number of BATs */
>> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
>> index 34f6009b1e..abbdd531c6 100644
>> --- a/target/ppc/mmu-hash64.c
>> +++ b/target/ppc/mmu-hash64.c
>> @@ -818,7 +818,6 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>>    {
>>        CPUState *cs = CPU(cpu);
>>        CPUPPCState *env = &cpu->env;
>> -    ppc_slb_t vrma_slbe;
>>        ppc_slb_t *slb;
>>        unsigned apshift;
>>        hwaddr ptex;
>> @@ -857,7 +856,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>>                }
>>            } else if (ppc_hash64_use_vrma(env)) {
>>                /* Emulated VRMA mode */
>> -            slb = &vrma_slbe;
>> +            slb = &env->vrma_slbe;
>>                if (build_vrma_slbe(cpu, slb) != 0) {
>>                    /* Invalid VRMA setup, machine check */
>>                    cs->exception_index = POWERPC_EXCP_MCHECK;
>> @@ -1006,7 +1005,6 @@ skip_slb_search:
>>    hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>>    {
>>        CPUPPCState *env = &cpu->env;
>> -    ppc_slb_t vrma_slbe;
>>        ppc_slb_t *slb;
>>        hwaddr ptex, raddr;
>>        ppc_hash_pte64_t pte;
>> @@ -1028,7 +1026,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>>                return raddr | env->spr[SPR_HRMOR];
>>            } else if (ppc_hash64_use_vrma(env)) {
>>                /* Emulated VRMA mode */
>> -            slb = &vrma_slbe;
>> +            slb = &env->vrma_slbe;
>>                if (build_vrma_slbe(cpu, slb) != 0) {
>>                    return -1;
>>                }
>> ---
>>
>> With the hunk amended, I'm happy to give:
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>
>>>    
>>>        int error_code;
>>>        uint32_t pending_interrupts;
>>> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
>>> index 4fd7b7ee74..34f6009b1e 100644
>>> --- a/target/ppc/mmu-hash64.c
>>> +++ b/target/ppc/mmu-hash64.c
>>> @@ -784,11 +784,41 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
>>>        return rma_sizes[rmls];
>>>    }
>>>    
>>> +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
>>> +{
>>> +    CPUPPCState *env = &cpu->env;
>>> +    target_ulong lpcr = env->spr[SPR_LPCR];
>>> +    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
>>> +    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
>>> +    int i;
>>> +
>>> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
>>> +        const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i];
>>> +
>>> +        if (!sps->page_shift) {
>>> +            break;
>>> +        }
>>> +
>>> +        if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) {
>>> +            slb->esid = SLB_ESID_V;
>>> +            slb->vsid = vsid;
>>> +            slb->sps = sps;
>>> +            return 0;
>>> +        }
>>> +    }
>>> +
>>> +    error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x"
>>> +                 TARGET_FMT_lx"\n", lpcr);
>>> +
>>> +    return -1;
>>> +}
>>> +
>>>    int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>>>                                    int rwx, int mmu_idx)
>>>    {
>>>        CPUState *cs = CPU(cpu);
>>>        CPUPPCState *env = &cpu->env;
>>> +    ppc_slb_t vrma_slbe;
>>>        ppc_slb_t *slb;
>>>        unsigned apshift;
>>>        hwaddr ptex;
>>> @@ -827,8 +857,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>>>                }
>>>            } else if (ppc_hash64_use_vrma(env)) {
>>>                /* Emulated VRMA mode */
>>> -            slb = &env->vrma_slb;
>>> -            if (!slb->sps) {
>>> +            slb = &vrma_slbe;
>>> +            if (build_vrma_slbe(cpu, slb) != 0) {
>>>                    /* Invalid VRMA setup, machine check */
>>>                    cs->exception_index = POWERPC_EXCP_MCHECK;
>>>                    env->error_code = 0;
>>> @@ -976,6 +1006,7 @@ skip_slb_search:
>>>    hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>>>    {
>>>        CPUPPCState *env = &cpu->env;
>>> +    ppc_slb_t vrma_slbe;
>>>        ppc_slb_t *slb;
>>>        hwaddr ptex, raddr;
>>>        ppc_hash_pte64_t pte;
>>> @@ -997,8 +1028,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>>>                return raddr | env->spr[SPR_HRMOR];
>>>            } else if (ppc_hash64_use_vrma(env)) {
>>>                /* Emulated VRMA mode */
>>> -            slb = &env->vrma_slb;
>>> -            if (!slb->sps) {
>>> +            slb = &vrma_slbe;
>>> +            if (build_vrma_slbe(cpu, slb) != 0) {
>>>                    return -1;
>>>                }
>>>            } else {
>>> @@ -1037,65 +1068,12 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex,
>>>        cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH;
>>>    }
>>>    
>>> -static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>>> -{
>>> -    CPUPPCState *env = &cpu->env;
>>> -    const PPCHash64SegmentPageSizes *sps = NULL;
>>> -    target_ulong esid, vsid, lpcr;
>>> -    ppc_slb_t *slb = &env->vrma_slb;
>>> -    uint32_t vrmasd;
>>> -    int i;
>>> -
>>> -    /* First clear it */
>>> -    slb->esid = slb->vsid = 0;
>>> -    slb->sps = NULL;
>>> -
>>> -    /* Is VRMA enabled ? */
>>> -    if (!ppc_hash64_use_vrma(env)) {
>>> -        return;
>>> -    }
>>> -
>>> -    /*
>>> -     * Make one up. Mostly ignore the ESID which will not be needed
>>> -     * for translation
>>> -     */
>>> -    lpcr = env->spr[SPR_LPCR];
>>> -    vsid = SLB_VSID_VRMA;
>>> -    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
>>> -    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
>>> -    esid = SLB_ESID_V;
>>> -
>>> -    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
>>> -        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
>>> -
>>> -        if (!sps1->page_shift) {
>>> -            break;
>>> -        }
>>> -
>>> -        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
>>> -            sps = sps1;
>>> -            break;
>>> -        }
>>> -    }
>>> -
>>> -    if (!sps) {
>>> -        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
>>> -                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
>>> -        return;
>>> -    }
>>> -
>>> -    slb->vsid = vsid;
>>> -    slb->esid = esid;
>>> -    slb->sps = sps;
>>> -}
>>> -
>>>    void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>>>    {
>>>        PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
>>>        CPUPPCState *env = &cpu->env;
>>>    
>>>        env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
>>> -    ppc_hash64_update_vrma(cpu);
>>>    }
>>>    
>>>    void helper_store_lpcr(CPUPPCState *env, target_ulong val)
>>>
>>
> 



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking
  2020-03-03  3:43 ` [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking David Gibson
  2020-03-05  8:56   ` Philippe Mathieu-Daudé
@ 2020-03-10 10:06   ` Cédric Le Goater
  2020-03-11  3:15     ` David Gibson
  1 sibling, 1 reply; 40+ messages in thread
From: Cédric Le Goater @ 2020-03-10 10:06 UTC (permalink / raw)
  To: David Gibson, qemu-ppc, qemu-devel, groug
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, Paolo Bonzini, paulus,
	Edgar E. Iglesias, Igor Mammedov

On 3/3/20 4:43 AM, David Gibson wrote:
> When we store the Logical Partitioning Control Register (LPCR) we have a
> big switch statement to work out which are valid bits for the cpu model
> we're emulating.
> 
> As well as being ugly, this isn't really conceptually correct, since it is
> based on the mmu_model variable, whereas the LPCR isn't (only) about the
> MMU, so mmu_model is basically just acting as a proxy for the cpu model.
> 
> Handle this in a simpler way, by adding a suitable lpcr_mask to the QOM
> class.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> ---
>  target/ppc/cpu-qom.h            |  1 +
>  target/ppc/mmu-hash64.c         | 36 ++-------------------------------
>  target/ppc/translate_init.inc.c | 27 +++++++++++++++++++++----
>  3 files changed, 26 insertions(+), 38 deletions(-)
> 
> diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
> index e499575dc8..15d6b54a7d 100644
> --- a/target/ppc/cpu-qom.h
> +++ b/target/ppc/cpu-qom.h
> @@ -177,6 +177,7 @@ typedef struct PowerPCCPUClass {
>      uint64_t insns_flags;
>      uint64_t insns_flags2;
>      uint64_t msr_mask;
> +    uint64_t lpcr_mask;         /* Available bits in the LPCR */
>      uint64_t lpcr_pm;           /* Power-saving mode Exit Cause Enable bits */
>      powerpc_mmu_t   mmu_model;
>      powerpc_excp_t  excp_model;
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index caf47ad6fc..0ef330a614 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -1095,42 +1095,10 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
>  
>  void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
>  {
> +    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
>      CPUPPCState *env = &cpu->env;
> -    uint64_t lpcr = 0;
>  
> -    /* Filter out bits */
> -    switch (env->mmu_model) {
> -    case POWERPC_MMU_2_03: /* P5p */
> -        lpcr = val & (LPCR_RMLS | LPCR_ILE |
> -                      LPCR_LPES0 | LPCR_LPES1 |
> -                      LPCR_RMI | LPCR_HDICE);
> -        break;
> -    case POWERPC_MMU_2_06: /* P7 */
> -        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
> -                      LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> -                      LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
> -                      LPCR_MER | LPCR_TC |
> -                      LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE);
> -        break;
> -    case POWERPC_MMU_2_07: /* P8 */
> -        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
> -                      LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> -                      LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
> -                      LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
> -                      LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE);
> -        break;
> -    case POWERPC_MMU_3_00: /* P9 */
> -        lpcr = val & (LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> -                      (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> -                      LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> -                      (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> -                      LPCR_DEE | LPCR_OEE)) | LPCR_MER | LPCR_GTSE | LPCR_TC |
> -                      LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE);
> -        break;
> -    default:
> -        g_assert_not_reached();
> -    }
> -    env->spr[SPR_LPCR] = lpcr;
> +    env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
>      ppc_hash64_update_rmls(cpu);
>      ppc_hash64_update_vrma(cpu);
>  }
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index f7acd3d61d..68aa4dfad8 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -8476,6 +8476,8 @@ POWERPC_FAMILY(POWER5P)(ObjectClass *oc, void *data)
>                      (1ull << MSR_DR) |
>                      (1ull << MSR_PMM) |
>                      (1ull << MSR_RI);
> +    pcc->lpcr_mask = LPCR_RMLS | LPCR_ILE | LPCR_LPES0 | LPCR_LPES1 |
> +        LPCR_RMI | LPCR_HDICE;
>      pcc->mmu_model = POWERPC_MMU_2_03;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> @@ -8614,6 +8616,12 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
>                      (1ull << MSR_PMM) |
>                      (1ull << MSR_RI) |
>                      (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
> +        LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> +        LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
> +        LPCR_MER | LPCR_TC |
> +        LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
>      pcc->mmu_model = POWERPC_MMU_2_06;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> @@ -8630,7 +8638,6 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
>      pcc->l1_dcache_size = 0x8000;
>      pcc->l1_icache_size = 0x8000;
>      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
>  }
>  
>  static void init_proc_POWER8(CPUPPCState *env)
> @@ -8785,6 +8792,13 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
>                      (1ull << MSR_TS0) |
>                      (1ull << MSR_TS1) |
>                      (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
> +        LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> +        LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
> +        LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
> +        LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
> +                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
>      pcc->mmu_model = POWERPC_MMU_2_07;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> @@ -8802,8 +8816,6 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
>      pcc->l1_dcache_size = 0x8000;
>      pcc->l1_icache_size = 0x8000;
>      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
> -                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
>  }
>  
>  #ifdef CONFIG_SOFTMMU
> @@ -8995,6 +9007,14 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
>                      (1ull << MSR_PMM) |
>                      (1ull << MSR_RI) |
>                      (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> +        (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> +        LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> +        (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> +                             LPCR_DEE | LPCR_OEE))
> +        | LPCR_MER | LPCR_GTSE | LPCR_TC |
> +        LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
>      pcc->mmu_model = POWERPC_MMU_3_00;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc64_v3_handle_mmu_fault;
> @@ -9014,7 +9034,6 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
>      pcc->l1_dcache_size = 0x8000;
>      pcc->l1_icache_size = 0x8000;
>      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
>  }
>  
>  #ifdef CONFIG_SOFTMMU
> 

David,

We forgot the POWER10 CPU. Could you squeeze the changes below into
that patch?

Thanks,

C. 

From a0d8cbc786c16b73376642f632cba99d75783da7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>
Date: Tue, 10 Mar 2020 11:02:40 +0100
Subject: [PATCH] target/ppc: Add a LPCR mask to POWER10 CPU
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fixes: 2d21e1e2a35c ("target/ppc: Use class fields to simplify LPCR masking")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/translate_init.inc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
index 68aa4dfad875..0ae145e18d80 100644
--- a/target/ppc/translate_init.inc.c
+++ b/target/ppc/translate_init.inc.c
@@ -9224,6 +9224,14 @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
                     (1ull << MSR_PMM) |
                     (1ull << MSR_RI) |
                     (1ull << MSR_LE);
+    pcc->lpcr_mask = LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
+        (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
+        LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
+        (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
+                             LPCR_DEE | LPCR_OEE))
+        | LPCR_MER | LPCR_GTSE | LPCR_TC |
+        LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE;
+    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
     pcc->mmu_model = POWERPC_MMU_3_00;
 #if defined(CONFIG_SOFTMMU)
     pcc->handle_mmu_fault = ppc64_v3_handle_mmu_fault;
@@ -9242,7 +9250,6 @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
     pcc->l1_dcache_size = 0x8000;
     pcc->l1_icache_size = 0x8000;
     pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
-    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
 }
 
 #if !defined(CONFIG_USER_ONLY)
-- 
2.21.1
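
The pattern being extended to POWER10 here is a per-class bit mask
consulted by a single store helper. A minimal sketch of that idea
(plain C, not QEMU code -- the names and mask value are invented):

#include <stdio.h>

/*
 * Toy model of replacing a per-model switch with per-class data:
 * each CPU class carries the set of implemented LPCR bits, and the
 * store helper just masks the incoming value with it.
 */
struct demo_class {
    const char *name;
    unsigned long lpcr_mask;        /* bits this model implements */
};

struct demo_cpu {
    const struct demo_class *cls;
    unsigned long lpcr;
};

static void demo_store_lpcr(struct demo_cpu *cpu, unsigned long val)
{
    cpu->lpcr = val & cpu->cls->lpcr_mask;  /* drop unimplemented bits */
}

int main(void)
{
    static const struct demo_class p9 = { "demo-p9", 0x00ff00ffUL };
    struct demo_cpu cpu = { .cls = &p9, .lpcr = 0 };

    demo_store_lpcr(&cpu, 0x12345678UL);
    printf("%s LPCR=%#lx\n", cpu.cls->name, cpu.lpcr);  /* 0x340078 */
    return 0;
}

A class without its mask set (as POWER10 was above) silently clamps
every LPCR store to zero, which is why the missing assignment matters.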


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking
  2020-03-10 10:06   ` Cédric Le Goater
@ 2020-03-11  3:15     ` David Gibson
  0 siblings, 0 replies; 40+ messages in thread
From: David Gibson @ 2020-03-11  3:15 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: lvivier, Thomas Huth, Xiao Guangrong, farosas, aik,
	Michael S. Tsirkin, Mark Cave-Ayland, qemu-devel, groug,
	Paolo Bonzini, qemu-ppc, Igor Mammedov, Edgar E. Iglesias,
	paulus

[-- Attachment #1: Type: text/plain, Size: 10673 bytes --]

On Tue, Mar 10, 2020 at 11:06:08AM +0100, Cédric Le Goater wrote:
> On 3/3/20 4:43 AM, David Gibson wrote:
> > When we store the Logical Partitioning Control Register (LPCR) we have a
> > big switch statement to work out which are valid bits for the cpu model
> > we're emulating.
> > 
> > As well as being ugly, this isn't really conceptually correct, since it is
> > based on the mmu_model variable, whereas the LPCR isn't (only) about the
> > MMU, so mmu_model is basically just acting as a proxy for the cpu model.
> > 
> > Handle this in a simpler way, by adding a suitable lpcr_mask to the QOM
> > class.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > Reviewed-by: Cédric Le Goater <clg@kaod.org>
> > Reviewed-by: Greg Kurz <groug@kaod.org>
> > ---
> >  target/ppc/cpu-qom.h            |  1 +
> >  target/ppc/mmu-hash64.c         | 36 ++-------------------------------
> >  target/ppc/translate_init.inc.c | 27 +++++++++++++++++++++----
> >  3 files changed, 26 insertions(+), 38 deletions(-)
> > 
> > diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
> > index e499575dc8..15d6b54a7d 100644
> > --- a/target/ppc/cpu-qom.h
> > +++ b/target/ppc/cpu-qom.h
> > @@ -177,6 +177,7 @@ typedef struct PowerPCCPUClass {
> >      uint64_t insns_flags;
> >      uint64_t insns_flags2;
> >      uint64_t msr_mask;
> > +    uint64_t lpcr_mask;         /* Available bits in the LPCR */
> >      uint64_t lpcr_pm;           /* Power-saving mode Exit Cause Enable bits */
> >      powerpc_mmu_t   mmu_model;
> >      powerpc_excp_t  excp_model;
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index caf47ad6fc..0ef330a614 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -1095,42 +1095,10 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> >  
> >  void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
> >  {
> > +    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
> >      CPUPPCState *env = &cpu->env;
> > -    uint64_t lpcr = 0;
> >  
> > -    /* Filter out bits */
> > -    switch (env->mmu_model) {
> > -    case POWERPC_MMU_2_03: /* P5p */
> > -        lpcr = val & (LPCR_RMLS | LPCR_ILE |
> > -                      LPCR_LPES0 | LPCR_LPES1 |
> > -                      LPCR_RMI | LPCR_HDICE);
> > -        break;
> > -    case POWERPC_MMU_2_06: /* P7 */
> > -        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
> > -                      LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> > -                      LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
> > -                      LPCR_MER | LPCR_TC |
> > -                      LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE);
> > -        break;
> > -    case POWERPC_MMU_2_07: /* P8 */
> > -        lpcr = val & (LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
> > -                      LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> > -                      LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
> > -                      LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
> > -                      LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE);
> > -        break;
> > -    case POWERPC_MMU_3_00: /* P9 */
> > -        lpcr = val & (LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> > -                      (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> > -                      LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> > -                      (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> > -                      LPCR_DEE | LPCR_OEE)) | LPCR_MER | LPCR_GTSE | LPCR_TC |
> > -                      LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE);
> > -        break;
> > -    default:
> > -        g_assert_not_reached();
> > -    }
> > -    env->spr[SPR_LPCR] = lpcr;
> > +    env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
> >      ppc_hash64_update_rmls(cpu);
> >      ppc_hash64_update_vrma(cpu);
> >  }
> > diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> > index f7acd3d61d..68aa4dfad8 100644
> > --- a/target/ppc/translate_init.inc.c
> > +++ b/target/ppc/translate_init.inc.c
> > @@ -8476,6 +8476,8 @@ POWERPC_FAMILY(POWER5P)(ObjectClass *oc, void *data)
> >                      (1ull << MSR_DR) |
> >                      (1ull << MSR_PMM) |
> >                      (1ull << MSR_RI);
> > +    pcc->lpcr_mask = LPCR_RMLS | LPCR_ILE | LPCR_LPES0 | LPCR_LPES1 |
> > +        LPCR_RMI | LPCR_HDICE;
> >      pcc->mmu_model = POWERPC_MMU_2_03;
> >  #if defined(CONFIG_SOFTMMU)
> >      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> > @@ -8614,6 +8616,12 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
> >                      (1ull << MSR_PMM) |
> >                      (1ull << MSR_RI) |
> >                      (1ull << MSR_LE);
> > +    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_DPFD |
> > +        LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> > +        LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2 |
> > +        LPCR_MER | LPCR_TC |
> > +        LPCR_LPES0 | LPCR_LPES1 | LPCR_HDICE;
> > +    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
> >      pcc->mmu_model = POWERPC_MMU_2_06;
> >  #if defined(CONFIG_SOFTMMU)
> >      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> > @@ -8630,7 +8638,6 @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
> >      pcc->l1_dcache_size = 0x8000;
> >      pcc->l1_icache_size = 0x8000;
> >      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> > -    pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
> >  }
> >  
> >  static void init_proc_POWER8(CPUPPCState *env)
> > @@ -8785,6 +8792,13 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
> >                      (1ull << MSR_TS0) |
> >                      (1ull << MSR_TS1) |
> >                      (1ull << MSR_LE);
> > +    pcc->lpcr_mask = LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_KBV |
> > +        LPCR_DPFD | LPCR_VRMASD | LPCR_RMLS | LPCR_ILE |
> > +        LPCR_AIL | LPCR_ONL | LPCR_P8_PECE0 | LPCR_P8_PECE1 |
> > +        LPCR_P8_PECE2 | LPCR_P8_PECE3 | LPCR_P8_PECE4 |
> > +        LPCR_MER | LPCR_TC | LPCR_LPES0 | LPCR_HDICE;
> > +    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
> > +                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
> >      pcc->mmu_model = POWERPC_MMU_2_07;
> >  #if defined(CONFIG_SOFTMMU)
> >      pcc->handle_mmu_fault = ppc_hash64_handle_mmu_fault;
> > @@ -8802,8 +8816,6 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
> >      pcc->l1_dcache_size = 0x8000;
> >      pcc->l1_icache_size = 0x8000;
> >      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> > -    pcc->lpcr_pm = LPCR_P8_PECE0 | LPCR_P8_PECE1 | LPCR_P8_PECE2 |
> > -                   LPCR_P8_PECE3 | LPCR_P8_PECE4;
> >  }
> >  
> >  #ifdef CONFIG_SOFTMMU
> > @@ -8995,6 +9007,14 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
> >                      (1ull << MSR_PMM) |
> >                      (1ull << MSR_RI) |
> >                      (1ull << MSR_LE);
> > +    pcc->lpcr_mask = LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> > +        (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> > +        LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> > +        (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> > +                             LPCR_DEE | LPCR_OEE))
> > +        | LPCR_MER | LPCR_GTSE | LPCR_TC |
> > +        LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE;
> > +    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
> >      pcc->mmu_model = POWERPC_MMU_3_00;
> >  #if defined(CONFIG_SOFTMMU)
> >      pcc->handle_mmu_fault = ppc64_v3_handle_mmu_fault;
> > @@ -9014,7 +9034,6 @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
> >      pcc->l1_dcache_size = 0x8000;
> >      pcc->l1_icache_size = 0x8000;
> >      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> > -    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
> >  }
> >  
> >  #ifdef CONFIG_SOFTMMU
> > 
> 
> David,
> 
> We forgot the POWER10 CPU. Could you squeeze the changes below into
> that patch?

Oops, I thought I'd covered that, but apparently not.

I've squashed this into the original patch.


> 
> Thanks,
> 
> C. 
> 
> >From a0d8cbc786c16b73376642f632cba99d75783da7 Mon Sep 17 00:00:00 2001
> From: =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>
> Date: Tue, 10 Mar 2020 11:02:40 +0100
> Subject: [PATCH] target/ppc: Add a LPCR mask to POWER10 CPU
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> Fixes: 2d21e1e2a35c ("target/ppc: Use class fields to simplify LPCR masking")
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---
>  target/ppc/translate_init.inc.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index 68aa4dfad875..0ae145e18d80 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -9224,6 +9224,14 @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
>                      (1ull << MSR_PMM) |
>                      (1ull << MSR_RI) |
>                      (1ull << MSR_LE);
> +    pcc->lpcr_mask = LPCR_VPM1 | LPCR_ISL | LPCR_KBV | LPCR_DPFD |
> +        (LPCR_PECE_U_MASK & LPCR_HVEE) | LPCR_ILE | LPCR_AIL |
> +        LPCR_UPRT | LPCR_EVIRT | LPCR_ONL | LPCR_HR | LPCR_LD |
> +        (LPCR_PECE_L_MASK & (LPCR_PDEE | LPCR_HDEE | LPCR_EEE |
> +                             LPCR_DEE | LPCR_OEE))
> +        | LPCR_MER | LPCR_GTSE | LPCR_TC |
> +        LPCR_HEIC | LPCR_LPES0 | LPCR_HVICE | LPCR_HDICE;
> +    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
>      pcc->mmu_model = POWERPC_MMU_3_00;
>  #if defined(CONFIG_SOFTMMU)
>      pcc->handle_mmu_fault = ppc64_v3_handle_mmu_fault;
> @@ -9242,7 +9250,6 @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
>      pcc->l1_dcache_size = 0x8000;
>      pcc->l1_icache_size = 0x8000;
>      pcc->interrupts_big_endian = ppc_cpu_interrupts_big_endian_lpcr;
> -    pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
>  }
>  
>  #if !defined(CONFIG_USER_ONLY)

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2020-03-11  3:33 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-03  3:43 [PATCH v7 00/17] target/ppc: Correct some errors with real mode handling David Gibson
2020-03-03  3:43 ` [PATCH v7 01/17] ppc: Remove stub support for 32-bit hypervisor mode David Gibson
2020-03-05  8:58   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 02/17] ppc: Remove stub of PPC970 HID4 implementation David Gibson
2020-03-03  3:43 ` [PATCH v7 03/17] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU David Gibson
2020-03-03  3:43 ` [PATCH v7 04/17] target/ppc: Introduce ppc_hash64_use_vrma() helper David Gibson
2020-03-05  8:59   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 05/17] spapr, ppc: Remove VPM0/RMLS hacks for POWER9 David Gibson
2020-03-03  3:43 ` [PATCH v7 06/17] target/ppc: Remove RMOR register from POWER9 & POWER10 David Gibson
2020-03-03  3:43 ` [PATCH v7 07/17] target/ppc: Use class fields to simplify LPCR masking David Gibson
2020-03-05  8:56   ` Philippe Mathieu-Daudé
2020-03-10 10:06   ` Cédric Le Goater
2020-03-11  3:15     ` David Gibson
2020-03-03  3:43 ` [PATCH v7 08/17] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS] David Gibson
2020-03-03  7:52   ` Greg Kurz
2020-03-05  8:53   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 09/17] target/ppc: Correct RMLS table David Gibson
2020-03-03  3:43 ` [PATCH v7 10/17] target/ppc: Only calculate RMLS derived RMA limit on demand David Gibson
2020-03-03  8:57   ` Greg Kurz
2020-03-05  8:48   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently David Gibson
2020-03-03  9:37   ` Greg Kurz
2020-03-05  9:17   ` Philippe Mathieu-Daudé
2020-03-05  9:47     ` Greg Kurz
2020-03-05 10:35       ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 12/17] spapr: Don't use weird units for MIN_RMA_SLOF David Gibson
2020-03-05  8:39   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 13/17] spapr,ppc: Simplify signature of kvmppc_rma_size() David Gibson
2020-03-05  8:47   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 14/17] spapr: Don't attempt to clamp RMA to VRMA constraint David Gibson
2020-03-03  9:55   ` Greg Kurz
2020-03-03  3:43 ` [PATCH v7 15/17] spapr: Don't clamp RMA to 16GiB on new machine types David Gibson
2020-03-03 10:02   ` Greg Kurz
2020-03-05  8:45   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 16/17] spapr: Clean up RMA size calculation David Gibson
2020-03-03 10:18   ` Greg Kurz
2020-03-05  8:43   ` Philippe Mathieu-Daudé
2020-03-03  3:43 ` [PATCH v7 17/17] spapr: Fold spapr_node0_size() into its only caller David Gibson
2020-03-03 10:32   ` Greg Kurz
2020-03-04  1:25     ` David Gibson

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.